The project report focuses on the development of an intelligent system for helmet and number plate detection to enhance road safety and law enforcement. It aims to utilize advanced computer vision and deep learning techniques for real-time monitoring and compliance with safety regulations. The report outlines the project's objectives, methodologies, and the significance of implementing such a system in traffic management.


Project ID: 21-25/CSIT/G11

A PROJECT REPORT ON
Helmet and Number Plate Detection
Submitted
for the Partial fulfillment of award of
Bachelor of Technology
in
Computer Science And Information Technology
by
Pawan Kumar Singh (2200270119008)
Syed Md Sheeraz (2100270110086)
Udit Singh (2100270110089)

Under the guidance of


Mr. Madhup Agrawal

AJAY KUMAR GARG ENGINEERING COLLEGE,

GHAZIABAD

May 6, 2025
Declaration

We hereby declare that the work presented in this report, entitled
Helmet and Number Plate Detection, was carried out by us. We
have not submitted the matter embodied in this report for the award of
any other degree or diploma of any other university or institute. We have
given due credit to the original authors and sources for all the words, ideas,
diagrams, graphics, computer programs, experiments, and results that are
not our original contribution. We have used quotation marks to identify
verbatim sentences and given credit to the original authors and sources.

We affirm that no portion of our work is plagiarized and that the experiments
and results reported in this report are not manipulated. In the event of
a complaint of plagiarism or of manipulation of the experiments and
results, we shall be fully responsible and answerable.

Name : Pawan Kumar Singh


Roll No. : 2200270119008

Name : Syed Md Sheeraz


Roll No. : 2100270110086

Name : Udit Singh


Roll No. : 2100270110089

Certificate
This is to certify that the report entitled Helmet and Number Plate
Detection, submitted by Syed Md Sheeraz (2100270110086), Udit
Singh (2100270110089), and Pawan Kumar Singh (2200270119008)
to Dr. A. P. J. Abdul Kalam Technical University, Lucknow (U.P.)
in partial fulfillment of the requirements for the award of the Degree of
Bachelor of Technology in Computer Science and Information Technology,
is a bonafide record of the project work carried out by them under
my/our guidance and supervision. This report has not been submitted
in any form to any other university or institute for any purpose, to the
best of my knowledge.

Mr. Madhup Agrawal
Assistant Professor
Department of Information Technology
Ajay Kumar Garg Engineering College

Dr. Rahul Sharma
Professor & HOD
Department of Information Technology
Ajay Kumar Garg Engineering College

Place: Ghaziabad
May 6, 2025

Acknowledgements
We would like to express our thanks to all the people who have helped bring
this project to fulfillment. We wish to put on record our very special
thanks to our major project mentor, Assistant Professor
Mr. Madhup Agrawal, for the support, guidance, encouragement, and
valuable insight with which he guided us through the entire process. His
mentorship has been pivotal in shaping our project and
leading us toward excellence.
We would also like to thank our Head of the Department, Dr. Rahul
Sharma, who provided us with the resources and an environment
that encourages innovation in learning. We also thank our teachers
and faculty members for all that they have shared with us at this crucial
juncture in our academic career. Finally, we thank everyone who helped
out or who, by their presence, indirectly contributed to this project.

Contents

Declaration i

Certificate ii

Acknowledgements iii

List of Figures vi

1 Introduction 1
1.1 Problem Statement of Project . . . . . . . . . . . . . . . . . 1
1.2 Scope of Project . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Detail of Problem Domain . . . . . . . . . . . . . . . . . . . 6
1.4 Gantt Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 System Requirements . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Project Report Outline . . . . . . . . . . . . . . . . . . . . . 10

2 Literature Review 11
2.1 Related Study . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Research Gaps . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Objective of Project . . . . . . . . . . . . . . . . . . . . . . 16

3 Methodology Used 21

4 Designing of Project 29
4.1 0-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 1-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 2-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . 35

5 Detailed Designing of Project 37


5.1 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.2 Entity-Relationship Diagram (ERD) . . . . . . . . . . . . . 39
5.3 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . . 43

6 Results and Discussions 45

7 Further Work 50

Bibliography 57

Appendix A 58

Appendix B 59

Appendix C 66

Appendix D 68

List of Figures

1.1 Gantt Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . 8


3.1 Flow Diagram of Proposed Model . . . . . . . . . . . . . . 21
4.1 0-level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 1-level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 2-level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . 35
5.1 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.2 ER Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.3 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Sequence Diagram . . . . . . . . . . . . . . . . . . . . . . . 43
6.1 Home Page . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.2 Uploading Vehicle Images for Detection . . . . . . . . . . . 48
6.3 Detection Results Page . . . . . . . . . . . . . . . . . . . . . 49
7.1 Screenshot Of Database . . . . . . . . . . . . . . . . . . . . 58
7.2 SDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.3 Plagiarism Report . . . . . . . . . . . . . . . . . . . . . . . 68
7.4 AI Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Chapter 1

Introduction

1.1 Problem Statement of Project


Road safety is a critical issue, particularly in regions where two-wheelers
dominate daily traffic and non-compliance with helmet regulations leads
to a high number of injuries and fatalities. Helmets play a crucial role
in protecting riders from head trauma during accidents, yet enforcement
remains weak due to the limitations of manual surveillance and the high
cost of constant human monitoring. Similarly, the accurate recognition
of vehicle number plates is essential for law enforcement, traffic manage-
ment, toll collection, and tracking stolen or unregistered vehicles. However,
traditional methods often fail due to insufficient infrastructure and inef-
ficiencies in manual monitoring. Despite existing laws mandating helmet
use and number plate visibility, large-scale implementation is challenging
in densely populated areas with heavy traffic flow. These issues underline
the growing need for an intelligent, automated system capable of real-
time traffic monitoring to ensure compliance with safety regulations and
improve road discipline.

1.2 Scope of Project
• Helmet Detection: The primary objective of this project is to develop
an intelligent system capable of automatically detecting helmet
usage among motorcycle riders through image and video analysis.
The system is designed to process still images or real-time video feeds
to identify whether the motorcyclist is wearing a helmet. This de-
tection is carried out using advanced computer vision techniques and
deep learning algorithms that analyze visual cues such as the shape,
color, and position of objects on a rider’s head. The model is trained
to differentiate between helmeted and non-helmeted individuals even
in challenging scenarios involving motion blur, varying head angles,
partial occlusion, and diverse lighting conditions. This feature has
significant applications in traffic law enforcement, where it can be
used to identify violators automatically, reduce manual monitoring
efforts, and ultimately improve road safety by promoting compliance
with helmet laws.
• Number Plate Recognition: The project also aims to implement
an automatic number plate recognition (ANPR) system capable of
detecting and reading vehicle number plates from both static images
and real-time video feeds. Utilizing deep learning and optical charac-
ter recognition (OCR) techniques, the system identifies the location
of the number plate in the image, isolates it, and then decodes the
alphanumeric characters for further processing. It is designed to func-
tion effectively in diverse environmental conditions, including varia-
tions in lighting, plate orientation, font styles, and plate cleanliness.
This component is critical for enabling automated traffic monitoring
and enforcement, as it allows for the real-time identification of vehi-
cles violating traffic rules—such as riding without helmets, entering
restricted zones, or exceeding speed limits. Additionally, the sys-
tem can be integrated into broader intelligent transportation systems
(ITS) to assist in vehicle tracking, law enforcement investigations, and
toll collection.
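The OCR stage described above typically returns a raw character string that still needs validation. As an illustrative sketch (not part of the report's implementation), the snippet below checks a decoded string against the standard Indian plate layout of state code, district number, series letters, and a four-digit number; the exact pattern is an assumption, and a real deployment would cover additional formats such as Bharat-series plates.

```python
import re

# Compact Indian plate layout: 2 state letters, 1-2 district digits,
# 1-2 series letters, 4-digit number. This pattern is an illustrative
# assumption; production systems must support more formats.
_COMPACT = re.compile(r"^([A-Z]{2})(\d{1,2})([A-Z]{1,2})(\d{4})$")

def normalize_plate(raw):
    """Strip OCR artifacts, validate, and return a canonically spaced plate."""
    cleaned = re.sub(r"[^A-Z0-9]", "", raw.upper())  # drop spaces, dashes, noise
    m = _COMPACT.match(cleaned)
    return " ".join(m.groups()) if m else None

print(normalize_plate("up-14 ab 1234"))  # → UP 14 AB 1234
print(normalize_plate("123456"))         # → None (not a valid plate string)
```

Rejecting strings that cannot be valid plates at this stage keeps obvious OCR misreads out of the downstream enforcement database.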
• Preprocessing of Images: To enhance the accuracy and reliability
of helmet and number plate detection, the system incorporates a ro-
bust image preprocessing pipeline. This component is responsible for
preparing raw images and video frames for analysis by applying vari-
ous enhancement techniques. Preprocessing begins with noise reduction
to eliminate visual distortions such as grain, blur, or compression
artifacts, which can negatively impact detection performance. Tech-
niques such as Gaussian filtering or median blurring may be employed
to clean the image. Image enhancement methods—including con-
trast adjustment, histogram equalization, and sharpening—are then
used to improve the visibility of key features, particularly under poor
lighting or weather conditions. Normalization is also applied to stan-
dardize pixel intensity values, ensuring uniformity across inputs for
consistent model behavior.
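As a concrete illustration of the normalization stage, the toy function below linearly rescales a list of grayscale intensities to a fixed range. It is a plain-Python sketch only; the actual pipeline would apply the equivalent OpenCV/NumPy operations (e.g. cv2.normalize, cv2.equalizeHist) to whole frames.

```python
def min_max_normalize(pixels, lo=0.0, hi=1.0):
    """Linearly rescale pixel intensities to the range [lo, hi].

    A toy stand-in for the normalization stage; production code would use
    NumPy/OpenCV on full image arrays rather than Python lists.
    """
    p_min, p_max = min(pixels), max(pixels)
    if p_max == p_min:                       # flat image: map everything to lo
        return [lo for _ in pixels]
    scale = (hi - lo) / (p_max - p_min)
    return [lo + (p - p_min) * scale for p in pixels]

print(min_max_normalize([50, 100, 150, 200]))  # values now span 0.0 to 1.0
```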
• Feature Extraction: Feature extraction plays a vital role in en-
hancing the system’s ability to accurately detect and recognize hel-
mets and number plates. In this phase, the system identifies and
isolates critical visual elements from the input images, such as con-
tours, edges, textures, and specific regions of interest (ROIs) that
correspond to helmets or license plates. These features are essential
for distinguishing between relevant and irrelevant parts of an image,
especially in complex scenes involving multiple vehicles or background
distractions. Advanced algorithms like edge detection (e.g., Canny or
Sobel), histogram of oriented gradients (HOG), or deep feature maps
from convolutional layers are utilized to extract meaningful patterns.
For helmet detection, shape and position features help differentiate
helmets from other headgear or obstructions. In the case of number
plate recognition, feature extraction focuses on the rectangular plate
region, character spacing, and font style to ensure precise recognition
during OCR processing.
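To make the edge-feature idea concrete, here is a minimal hand-rolled Sobel response for a single 3x3 patch; it is illustrative only, since a real pipeline would call cv2.Sobel or cv2.Canny over entire frames.

```python
# Standard Sobel kernels for horizontal (x) and vertical (y) gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(patch):
    """Gradient magnitude at the centre of a 3x3 grayscale patch."""
    gx = sum(SOBEL_X[i][j] * patch[i][j] for i in range(3) for j in range(3))
    gy = sum(SOBEL_Y[i][j] * patch[i][j] for i in range(3) for j in range(3))
    return (gx * gx + gy * gy) ** 0.5

# A vertical step edge (dark left, bright right) gives a strong response;
# a flat patch gives zero, which is why edges localize plate boundaries well.
edge_patch = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
print(sobel_magnitude(edge_patch))     # strong response on the edge
print(sobel_magnitude([[7] * 3] * 3))  # → 0.0 on a flat patch
```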
• Helmet and Number Plate Detection Algorithm: The core
functionality of the project lies in the implementation of advanced
machine learning and deep learning algorithms, particularly Convo-
lutional Neural Networks (CNNs), to enable accurate and efficient de-
tection of helmets and number plates. CNNs are well-suited for image
recognition tasks due to their ability to automatically learn spatial hi-
erarchies of features through backpropagation. In this system, CNN-
based architectures like YOLO (You Only Look Once) are utilized to
perform real-time object detection by identifying the location (bound-
ing boxes) and class (helmet or number plate) of objects within video
frames. The model processes each frame to detect whether a mo-
torcycle rider is wearing a helmet and simultaneously localizes and
recognizes the alphanumeric content of the vehicle’s number plate.

The system is trained on annotated datasets using supervised learn-
ing, where labeled images guide the network in learning discriminative
features.
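Training and evaluating such a detector hinges on comparing predicted bounding boxes against ground-truth annotations, which is conventionally done with the Intersection-over-Union (IoU) measure. A minimal implementation of the standard formula:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes.

    Detectors such as YOLO score predicted boxes against ground truth
    with IoU during training and evaluation.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area, 0 if disjoint
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143 (25 / 175)
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.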
• Real-time Processing: A critical objective of the system is its ability
to process video streams in real-time, which is essential for practical
applications such as traffic surveillance and law enforcement. The sys-
tem is designed to detect and recognize helmets and number plates
instantly as vehicles move through the camera’s field of view. This
involves optimizing the detection algorithms to achieve low latency
and high throughput, ensuring minimal delay between input and out-
put. Real-time processing demands efficient model inference, parallel
processing capabilities, and the use of hardware accelerators such as
GPUs or TPUs when necessary. The integration of high-speed data
handling, frame-by-frame analysis, and rapid decision-making enables
the system to function reliably even in busy traffic scenarios. Addi-
tionally, the real-time capability allows the system to be proactive,
enabling authorities to respond to violations immediately and main-
tain continuous monitoring without manual intervention.
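One common tactic for meeting the real-time constraint described above is to drop stale frames whenever inference falls behind the camera's frame rate, rather than letting a queue grow. The sketch below uses a stand-in `detect` callable in place of a real YOLO inference call, and the 40 ms default budget assumes roughly 25 FPS; both are illustrative assumptions.

```python
import time
from collections import deque

def process_stream(frames, detect, budget_s=0.040):
    """Run `detect` over frames, dropping frames when processing falls behind.

    `detect` stands in for the model inference call; each frame has a
    per-frame time budget (default ~40 ms, i.e. ~25 FPS).
    """
    results, backlog = [], deque(frames)
    while backlog:
        frame = backlog.popleft()
        start = time.perf_counter()
        results.append(detect(frame))
        elapsed = time.perf_counter() - start
        overshoot = int(elapsed // budget_s)     # whole budgets we blew past
        for _ in range(min(overshoot, len(backlog))):
            backlog.popleft()                    # drop stale frames to catch up
    return results

# With a fast stub detector, every frame is processed and none are dropped.
fast = process_stream(list(range(10)), lambda f: f)
print(len(fast))  # → 10
```

A slow detector would instead skip intermediate frames, trading completeness for bounded latency, which matches the low-latency requirement above.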
• Performance Evaluation: Evaluating the effectiveness of the pro-
posed system is essential to ensure its reliability and practical appli-
cability. The system’s performance will be assessed using standard
metrics such as accuracy, precision, recall, and F1 score. These met-
rics provide a comprehensive understanding of how well the model
performs in detecting helmets and recognizing number plates. Accu-
racy reflects the overall correctness of the model’s predictions, while
precision indicates the proportion of true positive predictions among
all positive predictions made. Recall measures the system’s ability
to correctly identify all relevant instances (e.g., all riders wearing or
not wearing helmets), and the F1 score serves as a harmonic mean of
precision and recall, offering a balanced evaluation metric, especially
in the case of imbalanced datasets. By applying these evaluation
metrics to results from both controlled tests and real-world scenar-
ios, the robustness and generalization capabilities of the system can
be validated. This step also helps identify any shortcomings or areas
for improvement, such as performance under varying environmental
conditions or with different camera resolutions.
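The metrics listed above can be computed directly from the confusion-matrix counts. The snippet below does so for a binary helmet/no-helmet labeling (label 1 = violation); in practice sklearn.metrics provides the same computations.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary helmet/no-helmet task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m)  # tp=2, fp=1, fn=1 → precision = recall = f1 = 2/3
```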
• Application in Traffic Monitoring: One of the primary goals of
the project is to explore the real-world applicability of the system in
traffic monitoring and enforcement scenarios. The helmet and number
plate detection system is designed to be integrated into existing traffic
surveillance infrastructure to enhance road safety and ensure compli-
ance with helmet usage laws. By automatically detecting whether
motorcycle riders are wearing helmets and identifying vehicle number
plates in real-time, the system can assist traffic authorities in enforc-
ing regulations more efficiently and effectively. It reduces the need
for manual monitoring by law enforcement personnel, thereby saving
time and reducing human error. Moreover, the system can be used
for collecting data-driven insights into rider behavior, frequency of vi-
olations, and traffic flow patterns. This data can inform future road
safety policies and help in deploying targeted awareness campaigns
or enforcement drives. The application of such an automated system
has the potential to significantly improve urban traffic management,
reduce accidents, and promote safer riding practices.
• Optimization for Edge Devices: To ensure the practical deploy-
ment and scalability of the proposed system, special emphasis is
placed on optimizing it for edge devices such as traffic surveillance
cameras and embedded processing units. Edge devices are typically
limited in computational power and memory compared to centralized
servers, so the system must be lightweight, efficient, and capable of
running autonomously in real-time. Optimization involves techniques
such as model pruning to reduce the number of unnecessary param-
eters, quantization to lower the precision of computations without
significantly affecting accuracy, and the use of compact yet effective
neural network architectures like YOLOv5-nano or MobileNet. These
adjustments allow the system to maintain high-speed inference and
reliable performance under the resource constraints of edge hardware.
Furthermore, by enabling real-time detection directly on edge devices,
the system minimizes latency, reduces dependency on constant inter-
net connectivity, and ensures immediate response in dynamic traffic
environments. This facilitates seamless integration into smart city in-
frastructure and enhances the autonomy and responsiveness of traffic
monitoring systems.
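As a concrete illustration of the quantization step mentioned above, the sketch below applies symmetric 8-bit quantization to a flat list of weights. Real toolchains (e.g. TensorFlow Lite or PyTorch post-training quantization) operate per tensor or per channel with calibration data, so this shows only the core scale-and-round idea.

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.031, 1.0]
q, s = quantize_int8(w)
recovered = dequantize(q, s)
# Each recovered weight is within one quantization step of the original,
# which is why accuracy degrades only slightly while storage drops 4x
# versus 32-bit floats.
assert all(abs(a - b) <= s for a, b in zip(w, recovered))
```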

1.3 Detail of Problem Domain
Motorcycle accidents represent a significant and growing concern on roads
globally, with one of the primary contributors being the non-compliance
of riders with helmet laws. Helmets are a proven life-saving device that
substantially reduces the risk of fatality or serious injury in the event of
an accident. Despite the well-documented safety benefits of wearing hel-
mets, a large number of motorcycle riders continue to disregard helmet
laws. This non-compliance not only poses a direct threat to the riders but
also burdens public health systems, contributes to traffic congestion, and
results in a substantial economic loss. Ensuring that riders wear helmets is
thus a critical factor in improving road safety and reducing the frequency
and severity of accidents.
However, traditional methods for enforcing helmet laws—such as manual
checks by law enforcement officers—are both inefficient and prone to er-
ror. These methods often rely on random checks or specific checkpoints,
which can be time-consuming, labor-intensive, and dependent on the visi-
bility of the rider’s helmet. In addition, such methods are reactive rather
than proactive, meaning that helmet violations may not be detected until
an accident occurs. Moreover, human error during manual checks, due
to fatigue, distractions, or other factors, can result in missed violations,
undermining the effectiveness of this approach. The inefficiency of manual
checks calls for an automated solution that can detect helmet usage in real
time, without requiring direct intervention from officers, thus improving
road safety enforcement and reducing the workload on law enforcement
agencies.
Alongside the helmet compliance issue, the identification of vehicles through
their number plates also presents a significant challenge in traffic mon-
itoring and law enforcement. Manual number plate recognition involves
lengthy processes such as physically stopping vehicles and manually check-
ing documents or entering the number plate into a database, which is slow
and inefficient. Additionally, human-based checks for violations such as
speeding, insurance compliance, or stolen vehicles require substantial time
and resources. The need for an efficient, automated system that can iden-
tify vehicle number plates quickly and accurately is becoming increasingly
urgent.

The core problem, therefore, lies in the need for an automated and ef-
ficient solution capable of real-time detection of helmet usage and

vehicle number plates from video feeds. With the advent of computer
vision and machine learning technologies, it is now feasible to address
this challenge through image processing algorithms that can detect
whether a rider is wearing a helmet and recognize the number plate of the
vehicle. Such a system can be integrated into existing traffic monitor-
ing infrastructure, improving the overall efficiency of enforcement and
ensuring that violations are detected automatically, without burdening law
enforcement officers or requiring human intervention.
For this solution to be effective, it must overcome several challenges. The
primary challenge is ensuring accurate detection of helmets and num-
ber plates under various environmental conditions, such as differ-
ent lighting (e.g., night vs. day), weather (e.g., fog, rain), and angles (e.g.,
rider position, vehicle speed). The system must be robust enough to
handle these variations, which requires advanced image preprocessing
techniques, effective feature extraction methods, and the implementa-
tion of machine learning models that can learn and adapt to a variety
of scenarios. Additionally, the system must be real-time to provide im-
mediate feedback and enforcement, requiring a high degree of optimization
to ensure that the detection algorithms operate swiftly while maintaining
accuracy.
The integration of such a system would not only enhance traffic safety
by ensuring compliance with helmet and number plate laws but also help
reduce law enforcement workload by automating previously manual
tasks. By detecting helmets and number plates automatically from video
feeds, the system could be seamlessly integrated into traffic surveillance
systems, ensuring faster, more accurate, and continuous monitoring. This
would lead to more efficient law enforcement and improved road safety.

1.4 Gantt Chart
A Gantt chart is a widely used visual project management tool that represents
the project schedule by displaying tasks, their respective durations,
start and end dates, and dependencies among them along a horizontal time
axis. It provides a comprehensive overview of the entire project timeline,
helping to track progress, schedule tasks, and improve team coordination
in a structured manner. In this project, the Gantt chart played
a vital role in organizing the different stages of helmet and number plate
detection, ensuring that all tasks were logically sequenced and deadlines
were adhered to.

Figure 1.1: Gantt Chart

1.5 System Requirements

1. Hardware Requirements:
Processor: Intel i7 or higher / AMD Ryzen 7 (or equivalent)
RAM: Minimum 16 GB (32 GB recommended for faster processing)
GPU: NVIDIA GTX 1080 or higher (for deep learning models)
Storage: SSD with at least 512 GB (to store images and models)
Display: High-resolution monitor (for image visualization)
Additional Devices: High-speed internet, external storage, and cooling
systems for GPU-intensive tasks

2. Software Requirements:
Operating System: Windows 10/11, Ubuntu (Linux), or macOS
Programming Language: Python (preferred)
Libraries/Frameworks: TensorFlow / PyTorch (model training), OpenCV
(image processing), NumPy, Pandas, Matplotlib (data analysis and
visualization)
Development Environment: Jupyter Notebook, PyCharm, or VS Code

3. Network Requirements:
A stable internet connection (minimum 10 Mbps) is required for model
training, data transfer, and cloud services access.

1.6 Project Report Outline
Chapter 2, Literature Review: This chapter reviews existing research on
helmet detection and number plate recognition, discussing the methods,
techniques, and algorithms used in prior studies, along with their results
and limitations. It critically evaluates the gaps in current approaches and
demonstrates how this research builds on and contributes to the existing
body of knowledge.
Chapter 3, Proposed Model: This chapter outlines the key steps
of the proposed model for helmet and number plate detection, including
image acquisition, preprocessing (noise reduction and enhancement),
segmentation (isolating relevant features), feature extraction (identifying
key aspects), and classification (using machine learning algorithms to
classify the objects).
Chapter 4, System Design: This chapter provides a high-level overview
of the system design, illustrating how the system components interact
through data flow and use case diagrams. It highlights the functionalities
of the project and the flow of data within the system.
Chapter 5, Detailed System Design: This chapter delves into the system's
architecture, presenting detailed diagrams such as the class diagram,
entity-relationship diagram, activity diagram, and sequence diagram,
which provide a deeper understanding of the system's structure and
functionality.
Chapter 6, Results and Discussions: This chapter presents the results
of the system, including accuracy and sensitivity metrics, and compares
them with existing models. It also includes visual outputs of the detection
process and discusses improvements and potential future enhancements of
the system.

Chapter 2

Literature Review

2.1 Related Study


Chidananda K. et al./2023 [1]: This research utilized the YOLO V5 deep
learning model for real-time object detection using a webcam. While the
method demonstrated high efficiency in object recognition, the accuracy
was affected by factors like camera quality and environmental conditions,
such as lighting and distance. These factors posed challenges in achiev-
ing precise detection in suboptimal settings. The study emphasizes the
promise of YOLO V5 for real-time tasks but highlights the need for high-
quality hardware and refined models to handle such challenges effectively.

J. Reddy Pasam et al./2022 [2]: This study focused on implementing
systems for detecting helmets and license plates using deep learning
techniques. The system proved to be accurate in recognizing helmets and
license plates, which is beneficial for traffic enforcement and monitoring.
However, the research noted that the incomplete dataset for helmet de-
tection presented some limitations, which reduced the model’s general-
izability. Increasing the dataset size and diversity could lead to better
performance.

B. Amoolya et al./2021 [3]: This study explored deep learning methods
for both helmet detection and license plate recognition. The models
exhibited strong accuracy in detecting helmets and license plates, especially
in controlled settings. However, the performance deteriorated in busy or
dynamically changing environments, where object overlap and inconsistent
lighting affected detection. This highlights the need for improving model
adaptability to real-world conditions.

A. R. et al./2021 [4]: The authors conducted a comprehensive survey of hel-
met detection and number plate recognition techniques. The study offered
in-depth insights into various approaches and their potential applications.
However, it was more theoretical and lacked practical implementation, lim-
iting its usefulness for real-world deployment. Future work could integrate
practical validation with theoretical research.

Kulkarni P. S. et al./2021 [5]: This research explored the use of deep
learning, particularly CNNs and YOLO, for real-time helmet detection.
The study demonstrated that real-time processing with good accuracy
is achievable. However, the performance was notably affected by poor
lighting conditions, indicating the need for model enhancements to handle
variable lighting and environmental conditions. This research underscores
the importance of optimizing models for consistent usability.

Ghazali R. et al./2021 [6]: The study applied YOLOv3 for vehicle detection
in smart city applications, with a focus on real-time monitoring. The
system achieved strong accuracy in vehicle detection, which is essential
for intelligent traffic systems. However, the real-time processing required
substantial computational resources, making it difficult to deploy on low-
cost hardware. Optimizing the computational demands could enhance its
scalability.

Wu Y. et al./2021 [7]: This research reviewed various deep learning
applications in intelligent transportation systems (ITS), outlining the benefits of
these technologies. While it demonstrated the potential of these systems,
it lacked detailed solutions for real-time traffic monitoring and helmet de-
tection. Addressing this gap would make the research more practical for
implementation.

R. Roy et al./2021 [8]: The study implemented machine learning
techniques for helmet and license plate recognition. It showed effective detec-
tion in controlled environments, making it suitable for specific applications.
However, challenges arose when dealing with multiple objects, as overlap-
ping or occluded items degraded performance. Future work could focus on
enhancing the system’s ability to handle complex scenarios.

Ahmad S. et al./2021 [9]: This study combined YOLOv3 and OpenCV for
automatic license plate recognition. The system performed efficiently and

accurately in favorable conditions, but its accuracy decreased significantly
when dealing with noisy or partially obscured license plates. Advanced
preprocessing techniques are needed to improve performance under such
conditions.

Table 2.1: Literature Review

2.2 Research Gaps
When developing a helmet and number plate detection model, there are
several research gaps and challenges that can be explored in the context
of both technologies and their real-world applications. Below are some
potential research gaps for improving such a model:
1. Camera and Environmental Limitations: Many studies utiliz-
ing object detection algorithms, such as YOLO, face challenges in
suboptimal environments. Variations in lighting, camera quality, and
object distance significantly impact the accuracy of real-time detec-
tion models.
2. Dataset Diversity: Existing research often relies on incomplete or
limited datasets, particularly for specific applications like helmet de-
tection. This reduces model robustness and generalizability across
diverse real-world scenarios.
3. Performance in Dynamic Environments: Models exhibit re-
duced efficiency in crowded or dynamically changing environments
due to overlapping objects, occlusions, and inconsistent lighting con-
ditions. Enhanced adaptability to such conditions is necessary.
4. Computational Requirements: Deep learning models like YOLOv3
and YOLOv5 demonstrate high accuracy but require substantial com-
putational power. This limits their scalability and deployment on
low-cost or resource-constrained hardware.
5. Angle and Quality of Inputs: License plate recognition systems
show performance degradation when dealing with low-quality images
or plates captured at varying angles. Advanced preprocessing tech-
niques are needed to address these issues.
6. Theoretical vs. Practical Validation: Some research focuses
heavily on theoretical frameworks without sufficient practical imple-
mentation, limiting the real-world applicability of findings.
7. Real-Time Traffic Monitoring: While intelligent transportation
systems leverage deep learning, there is a lack of specific, actionable
solutions for real-time traffic monitoring and compliance detection.
8. Multi-Object Detection: Existing approaches struggle with de-
tecting and distinguishing between multiple objects, especially when
there are occlusions or overlapping features.

2.3 Objective of Project
The primary objectives of this project are as follows:
1. Real-Time Object Detection: Real-time object detection is a
critical task in numerous applications, including traffic monitoring,
surveillance, and autonomous vehicles. The goal is to develop a sys-
tem capable of accurately identifying and localizing objects in video
frames as they are captured, with minimal delay. To achieve this,
state-of-the-art deep learning algorithms, such as YOLO (You Only
Look Once), have been widely adopted due to their speed and ac-
curacy. YOLO is particularly well-suited for real-time applications
because of its ability to predict multiple bounding boxes and class la-
bels in a single pass through the network, making it much faster than
traditional object detection methods that rely on separate stages for
region proposal, feature extraction, and classification.
In this project, YOLO is utilized to detect both helmets and number
plates in real-time video feeds from traffic surveillance systems. The
model’s architecture, based on Convolutional Neural Networks
(CNNs), allows it to efficiently process image data by learning to
recognize complex patterns and features directly from raw pixel val-
ues. The speed of YOLO is achieved by using a single neural network
to predict both the location (bounding boxes) and the class (helmet,
vehicle, number plate) of the objects simultaneously. This approach
significantly reduces the computational cost compared to older, multi-stage detection models.
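The single-pass idea described above can be illustrated with a small sketch that decodes one YOLO-style prediction, given as normalized center coordinates with a confidence score, into pixel corner coordinates and applies a confidence threshold. The 5-tuple layout and the 0.5 threshold are illustrative assumptions, not the exact output format of any particular YOLO version.

```python
# Hedged sketch: decode YOLO-style predictions (normalized cx, cy, w, h,
# confidence) into pixel-space corner boxes, keeping only confident
# detections. The tuple layout and threshold are illustrative assumptions.

def decode_predictions(preds, img_w, img_h, conf_thresh=0.5):
    """preds: list of (cx, cy, w, h, conf) with coords normalized to [0, 1]."""
    boxes = []
    for cx, cy, w, h, conf in preds:
        if conf < conf_thresh:
            continue  # discard low-confidence detections
        x1 = (cx - w / 2) * img_w
        y1 = (cy - h / 2) * img_h
        x2 = (cx + w / 2) * img_w
        y2 = (cy + h / 2) * img_h
        boxes.append((round(x1), round(y1), round(x2), round(y2), conf))
    return boxes

# Example: one confident detection in a 640x480 frame; the second,
# low-confidence prediction is filtered out.
dets = decode_predictions(
    [(0.5, 0.5, 0.25, 0.5, 0.9), (0.1, 0.1, 0.05, 0.05, 0.2)], 640, 480
)
print(dets)  # [(240, 120, 400, 360, 0.9)]
```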
2. Addressing Environmental Challenges: In real-time object de-
tection, the ability to maintain high accuracy across a wide range of
environmental conditions is crucial for ensuring the robustness and re-
liability of the system. Environmental factors such as lighting vari-
ations, distance from the object, camera quality, and weather
conditions can significantly impact the performance of object de-
tection algorithms. Therefore, addressing these challenges is a key
component in developing a reliable system for applications such as
traffic monitoring and surveillance.
One of the primary environmental challenges is lighting, which can
vary dramatically depending on the time of day, weather conditions,
or even the direction the camera is facing. For instance, images cap-
tured during the night or under low-light conditions may suffer from
poor visibility, which makes object detection difficult. To mitigate
this issue, image preprocessing techniques such as histogram
equalization and adaptive contrast enhancement can be em-
ployed. These methods enhance the visibility of objects in dark or
poorly lit conditions, making it easier for the detection algorithm to
identify features like helmets and number plates.
Camera quality is another factor that can influence the accuracy of
detection. Cameras with lower resolution or poor calibration may pro-
duce images with artifacts or distortion that can hinder the model’s
ability to accurately detect objects. To overcome this, the system
can incorporate techniques such as image stabilization, denoising
algorithms, and the use of higher-quality cameras for critical moni-
toring areas. Furthermore, fine-tuning the model using high-quality
training datasets that include images captured from a variety of
camera models can help improve the system’s robustness across dif-
ferent setups.
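The lighting mitigations mentioned above can be illustrated with a minimal, dependency-free sketch of global histogram equalization on an 8-bit grayscale image. In practice a library routine such as OpenCV's equalizeHist (or CLAHE for local contrast) would be used; this pure-Python version only shows the idea.

```python
# Hedged sketch: global histogram equalization for an 8-bit grayscale
# image, represented as a list of rows. Production code would use an
# optimized library routine (e.g. OpenCV), and CLAHE for local contrast.

def equalize_histogram(image):
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of intensities
    cdf = [0] * 256
    running = 0
    for v in range(256):
        running += hist[v]
        cdf[v] = running
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        if n == cdf_min:
            return p  # flat image: nothing to stretch
        # spread intensities over the full 0..255 range
        return (cdf[p] - cdf_min) * 255 // (n - cdf_min)

    return [[remap(p) for p in row] for row in image]

# A dark, low-contrast 2x2 patch stretches to the full range
print(equalize_histogram([[1, 1], [2, 3]]))  # [[0, 0], [127, 255]]
```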
3. Dataset Diversity and Model Adaptability: Enhance dataset
diversity and model training methodologies to improve the system’s
adaptability to real-world scenarios. The performance of any object detection system heavily relies on the quality and diversity of the dataset
used for training the model. One of the main challenges in real-world
applications, such as helmet detection and number plate recognition,
is ensuring that the model can effectively adapt to a wide range of
conditions, including different lighting, weather, backgrounds, object
occlusions, and diverse vehicle types. To overcome these challenges,
it is crucial to enhance both the diversity of the dataset and the
model’s adaptability to handle various real-world scenarios.
Dataset diversity plays a key role in improving the model’s gen-
eralization capability. A diverse dataset should include a variety of
scenarios, such as different types of vehicles, helmet designs, number
plate styles, and environmental conditions. For instance, datasets
should cover a wide range of camera angles, distances, lighting con-
ditions (e.g., day, night, low-light), weather conditions (e.g., sunny,
rainy, foggy), and different backgrounds (e.g., urban, rural, highway
settings). By exposing the model to such variability during the train-
ing phase, it can learn to detect objects reliably across a variety of
real-world scenarios, rather than being overfit to a narrow set of con-
ditions.

Data augmentation techniques are critical for increasing the effec-
tive size and diversity of the dataset without the need to collect more
data. Common augmentation methods include rotation, flipping, scal-
ing, cropping, color adjustments, and noise addition. These tech-
niques simulate different environmental conditions and object place-
ments, enabling the model to learn invariant features that are ro-
bust to changes in orientation, size, and color. Additionally, generat-
ing synthetic data through simulation or leveraging publicly available
datasets can further enrich the dataset and expose the model to more
diverse situations.
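One subtlety of the augmentation methods listed above is that geometric transforms must also transform the bounding-box annotations. The sketch below flips pixel-space boxes for an image of known width, a minimal stand-in for what augmentation libraries do internally.

```python
# Hedged sketch: horizontally flip an image's bounding-box annotations.
# A box (x_min, y_min, x_max, y_max) in an image of width W maps to
# (W - x_max, y_min, W - x_min, y_max); y-coordinates are unchanged.

def hflip_boxes(boxes, img_w):
    return [(img_w - x2, y1, img_w - x1, y2) for (x1, y1, x2, y2) in boxes]

# A helmet box near the left edge moves to the right edge after the flip
print(hflip_boxes([(10, 20, 30, 60)], img_w=100))  # [(70, 20, 90, 60)]
```

Flipping twice returns the original annotation, a handy sanity check when validating an augmentation pipeline.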
4. Optimization for Resource-Constrained Devices: Optimize the
computational requirements of the system to facilitate deployment on
resource-constrained devices without compromising performance. Deploying
an object detection system on resource-constrained devices, such as
edge devices, embedded systems, or mobile platforms, presents signif-
icant challenges due to their limited computational power, memory,
and storage. To ensure that the system can function efficiently in real-
time without compromising its performance, optimizing its computa-
tional requirements is crucial. This process involves making trade-offs
between accuracy, speed, and resource usage to ensure the model can
run effectively on devices with limited resources while still delivering
reliable results.
One of the key strategies for optimization is model compression.
Deep learning models, especially those based on convolutional neural
networks (CNNs) like YOLO, are typically large and computation-
ally intensive. To reduce their size and complexity, techniques such
as pruning, quantization, and weight sharing can be applied.
Pruning involves removing less important neurons or connections
in the model, reducing its overall size and computational load while
maintaining accuracy. Quantization reduces the precision of the
model’s weights and activations, allowing them to be stored in smaller
formats (e.g., using 8-bit integers instead of 32-bit floats), thus reduc-
ing memory usage and accelerating inference speed. Weight sharing
groups similar weights together, further reducing the model’s size and
improving its efficiency.
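The effect of 8-bit quantization can be sketched with the standard affine scheme (a scale and a zero point). Real frameworks apply this per tensor or per channel with calibrated ranges, so the numbers below are purely illustrative.

```python
# Hedged sketch: affine (asymmetric) 8-bit quantization of float weights.
# q = round(x / scale) + zero_point, clamped to [0, 255]; dequantization
# inverts the mapping with a small, bounded rounding error.

def quantize(xs, x_min, x_max):
    scale = (x_max - x_min) / 255.0
    zero_point = round(-x_min / scale)
    q = [max(0, min(255, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.7, 1.0]
q, scale, zp = quantize(weights, -1.0, 1.0)
restored = dequantize(q, scale, zp)
# Each restored weight is within about half a quantization step of the
# original, while storage drops from 32-bit floats to 8-bit integers.
print(max(abs(a - b) for a, b in zip(weights, restored)) <= scale / 2 + 1e-9)
```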
5. Application in Traffic Monitoring: Provide actionable solutions
for specific use cases, such as helmet and license plate detection, with
potential applications in traffic law enforcement and intelligent transportation systems. The integration of advanced object detection systems, specifically for helmet and license plate detection, can significantly enhance traffic monitoring and law enforcement systems.
By leveraging deep learning models capable of real-time analysis,
these systems offer actionable solutions for various traffic-related use
cases, improving road safety, streamlining law enforcement processes,
and contributing to the development of intelligent transportation
systems (ITS).
In the context of helmet detection, real-time object detection mod-
els can be deployed to monitor whether motorcyclists are complying
with helmet laws, which are critical for reducing head injuries and fa-
talities in accidents. Traditional methods of helmet enforcement rely
on manual checks, either by police officers or traffic cameras, which
are time-consuming, prone to human error, and difficult to scale. By
automatically detecting helmets using video feeds from traffic cam-
eras, the system can provide immediate alerts to law enforcement
personnel when a motorcyclist is not wearing a helmet, facilitating
more efficient enforcement. This system can be extended to include
a reporting mechanism where offenders are automatically flagged and
their vehicle details, including the number plate, are logged for fur-
ther action. This reduces the need for human intervention, enabling
real-time decision-making and streamlining the enforcement of helmet
regulations.
6. Advanced Preprocessing Techniques: Integrate advanced pre-
processing techniques for handling noisy, low-quality, or angled inputs
to achieve superior recognition accuracy. In real-time object detection
systems, preprocessing plays a crucial role in improving the quality of
input data and ensuring high accuracy. This is especially important
when dealing with real-world scenarios where data can be noisy, of
low quality, or captured from difficult angles. To enhance the perfor-
mance of the detection model, several advanced preprocessing tech-
niques can be employed. Noise reduction is a key step, as real-world
images often contain noise due to low-light conditions, compression
artifacts, or sensor limitations. Image denoising methods, such as
Gaussian smoothing or median filtering, can reduce unwanted noise
while preserving important image details. For images captured in
low-light or poor-quality conditions, image enhancement techniques
like histogram equalization, contrast enhancement, and adaptive his-
togram equalization (CLAHE) can be applied to improve visibility
and make important features, such as helmets and number plates,
more distinguishable. Additionally, for images captured at various
angles or distances, geometric transformations such as perspective
correction can be applied to standardize the input images and correct
for distortions, improving model accuracy.
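Among the noise-reduction methods named above, median filtering is easy to sketch without dependencies. The version below applies a 3x3 median to interior pixels only, leaving the one-pixel border untouched for simplicity; library implementations (e.g. OpenCV's medianBlur) handle borders properly and run far faster.

```python
# Hedged sketch: 3x3 median filter for a grayscale image given as a list
# of rows. Interior pixels are replaced by the median of their 3x3
# neighborhood, which suppresses isolated salt-and-pepper noise spikes.

def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; border pixels kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out

# A single bright noise spike in a dark patch is removed
noisy = [
    [10, 10, 10],
    [10, 255, 10],
    [10, 10, 10],
]
print(median_filter_3x3(noisy))  # center pixel becomes 10
```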
7. Comprehensive Testing: Demonstrate the feasibility of the pro-
posed system through comprehensive testing in both controlled and
dynamic environmentsTo ensure the effectiveness and reliability of the
proposed helmet and number plate detection system, comprehensive
testing is conducted in both controlled and dynamic environments.
Testing in controlled environments allows for the evaluation of the
system’s performance under known, fixed conditions. In this phase,
the system is subjected to carefully selected test cases with predefined
variables, such as images taken in ideal lighting conditions, standard
distances, and consistent camera angles. This helps to establish a
baseline performance and verify that the system can detect helmets
and number plates accurately when there are minimal disturbances.
In addition to controlled environments, the system is also tested in
dynamic, real-world conditions. These tests are crucial for evaluat-
ing how the system performs in environments with varying lighting,
weather conditions, vehicle types, and angles. Dynamic testing simu-
lates real traffic situations where factors like moving vehicles, occlu-
sions, and unpredictable environmental changes are common. During
this phase, the system’s ability to handle issues like low visibility,
shadows, or camera shake is critically assessed. The detection models
are evaluated based on their speed, accuracy, and robustness, partic-
ularly in diverse and challenging scenarios, to ensure they meet the
requirements for real-time performance.

Chapter 3

Methodology Used

This section describes the proposed methodology. The project is structured
into several phases, each contributing to achieving the desired objectives
effectively. The following steps outline the approach taken:

Figure 3.1: Flow Diagram of Proposed Model

1. Problem Identification: The first critical step in developing any
solution is to identify and clearly define the problem to be addressed.
For this project, the primary focus is on real-time object detection
using advanced deep learning techniques. The key challenge lies in
designing a system capable of accurately and efficiently detecting
objects, specifically helmets and number plates, under a wide range of
real-world conditions.
In traffic monitoring or surveillance applications, the environment is
dynamic and unpredictable, which adds to the complexity of the prob-
lem. Objects may be detected from different distances, angles,
and lighting conditions, all of which can significantly affect detec-
tion performance. For instance, low light conditions during nighttime,
glare from headlights, or the varying weather conditions like rain or
fog can degrade image quality and make it harder to detect objects
accurately. Additionally, detecting objects at different distances re-
quires the model to adapt to changes in object size within the frame,
and detecting objects from multiple angles demands that the system
generalizes well across various orientations.
Another aspect of the problem is the real-time processing require-
ment. Many applications, such as traffic surveillance or law enforce-
ment, need the system to process video feeds or live images instanta-
neously, making it imperative for the object detection model to not
only be accurate but also efficient in terms of computational re-
sources. The system must be able to process frames quickly enough
to detect objects in real-time without significant delay, ensuring that
it can be used in live environments for immediate action, such as
traffic enforcement or safety monitoring.
2. Dataset Collection and Preprocessing: A comprehensive and di-
verse dataset is crucial for training an effective object detection model.
In this project, the dataset was meticulously curated to capture a wide
range of traffic scenarios that include various object types such as ve-
hicles, helmets, and license plates. The images were collected under
different environmental conditions to ensure the model can perform
well in real-world settings. These conditions include varying light-
ing scenarios (daylight, dusk, nighttime), different weather conditions
(such as rain and fog), and multiple camera angles. By incorporat-
ing a variety of environmental factors, the dataset helps ensure the
model is capable of recognizing helmets and number plates in a range
of real-world traffic monitoring situations. The dataset also includes
a variety of vehicle types, helmet styles (full-face and open-face), and
number plates from different regions, allowing the model to handle re-
gional variations in appearance, size, and design. This diverse collec-
tion ensures that the model can generalize effectively across different
scenarios.
Once the data was collected, several preprocessing steps were em-
ployed to make the dataset suitable for training the model. To arti-
ficially increase the dataset’s size and variability, image augmenta-
tion techniques were applied. This included random rotations, hori-
zontal flipping, cropping, and resizing, which simulated different con-
ditions and orientations that the model may encounter in real-world
scenarios. For instance, rotating images allows the model to detect
objects from multiple angles, while flipping images helps the model
recognize mirrored versions of objects. Random cropping and resizing
not only simulate partial occlusions but also help the model focus on
varying parts of the images. Additionally, color jittering was ap-
plied to adjust brightness, contrast, and saturation, which helps the
model become more robust to different lighting conditions.
To ensure the model performs optimally, the pixel values of the im-
ages were normalized to a range between 0 and 1. This helps speed
up the training process and aids the model in converging more effec-
tively. Furthermore, resizing all images to a fixed resolution (such
as 416x416 or 640x640 pixels) ensures consistency across the dataset
and allows for efficient batch processing during training. Finally, each
image in the dataset was annotated with bounding boxes around the
objects of interest (helmets and number plates), allowing the model to
learn where these objects are located within the images. The annota-
tions were stored in a bounding box format, specifying the coordinates
of each box to provide precise object localization.
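Resizing to a fixed input resolution while preserving aspect ratio is usually done by "letterboxing": scale the image so the longer side fits, then pad the remainder. The arithmetic can be sketched as follows; the 640x640 target mirrors the resolution mentioned above, and splitting the padding evenly is an implementation choice.

```python
# Hedged sketch: compute letterbox scaling and padding for resizing an
# image to a square target (e.g. 640x640) without distorting its aspect
# ratio. The actual pixel resize/pad would be done by an image library.

def letterbox_params(img_w, img_h, target=640):
    scale = min(target / img_w, target / img_h)
    new_w, new_h = round(img_w * scale), round(img_h * scale)
    pad_x = (target - new_w) // 2  # left padding (remainder goes right)
    pad_y = (target - new_h) // 2  # top padding (remainder goes bottom)
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1280x720 frame is scaled by 0.5 to 640x360, then padded 140 px on
# top and bottom to reach 640x640.
print(letterbox_params(1280, 720))  # (0.5, (640, 360), (0, 140))
```

The same scale and padding must also be applied to the bounding-box annotations so they stay aligned with the resized image.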
3. Model Selection and Training: The selection of the appropriate
deep learning model is a critical step in any object detection task,
especially when the goal is real-time performance. For this project,
YOLO (You Only Look Once) was chosen due to its outstand-
ing speed and accuracy in detecting objects within real-time video
streams. YOLO is a state-of-the-art convolutional neural network
(CNN)-based model designed specifically for object detection tasks.
What makes YOLO particularly effective is its ability to predict both
bounding boxes and class labels simultaneously, allowing it to de-
tect and localize multiple objects in an image or video frame in a
single forward pass. This efficiency is crucial for real-time applica-
tions like traffic monitoring, where quick and accurate detection is
necessary.
To train the YOLO model, the preprocessed dataset was used,
which had been augmented and standardized to ensure diversity and
consistency. The model was trained using transfer learning, a tech-
nique where a pre-trained model, often trained on a large dataset like
COCO or ImageNet, is adapted to the specific task at hand. Transfer
learning helps accelerate the training process by leveraging knowledge
from a broader dataset and fine-tuning the model to work more effec-
tively with the project’s specific data, thus improving accuracy and
reducing the time required for convergence.
During the training process, several key hyperparameters were ad-
justed to optimize the model’s performance. This included selecting
the right optimizer, with Adam being chosen due to its adaptive
learning rate, which makes it well-suited for tasks with complex and
large datasets. Additionally, techniques such as dropout and batch
normalization were applied to improve the model’s generalization
capabilities. Dropout helps prevent overfitting by randomly deacti-
vating certain neurons during training, forcing the model to learn
more robust features. Batch normalization, on the other hand, helps
stabilize the training process by normalizing the activations of neu-
rons, allowing the model to converge more quickly and reducing the
likelihood of training instability.
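The dropout mechanism mentioned above can be sketched in a few lines of plain Python. This is the "inverted" variant commonly used in practice: units are zeroed with probability p during training and the survivors are scaled up, so no rescaling is needed at inference time. The seed and layer size here are arbitrary illustration choices.

```python
# Hedged sketch: "inverted" dropout as applied during training. Each
# activation is zeroed with probability p and the survivors are scaled
# by 1/(1-p), keeping the expected activation unchanged at inference.
import random

def dropout(activations, p, rng):
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)  # fixed seed for reproducibility
acts = [1.0] * 8
dropped = dropout(acts, p=0.5, rng=rng)
print(dropped)  # roughly half the units zeroed, survivors scaled to 2.0
```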
4. System Optimization: One of the major challenges for real-time
object detection applications is ensuring that the model is computa-
tionally efficient while maintaining high accuracy. Since real-time sys-
tems often need to process video feeds or images instantly, the system
must be optimized for speed and efficiency to ensure it can handle
the demands of live environments. To make the system more scal-
able and deployable, especially on edge devices with limited com-
putational resources (such as mobile phones, embedded devices, or
low-cost cameras), several optimization techniques were applied.
The first optimization technique is model pruning, which involves
removing unnecessary or redundant neurons and connections from
the neural network. This reduces the model’s complexity and size
without significantly impacting its performance, making it more effi-
cient for deployment on devices with limited memory and processing
power. Pruning helps in reducing both the computational load and
the memory usage, allowing the model to run faster on edge devices.
Another critical technique used is quantization, which reduces the
precision of the model’s weights and activations from floating-point
values to lower bit-width representations (e.g., from 32-bit to 8-bit
integers). This drastically reduces the model size, leading to faster
inference times and less memory consumption. Quantization is espe-
cially useful when deploying models on resource-constrained devices,
where reducing the model’s size is essential for meeting real-time pro-
cessing requirements.
Knowledge distillation is another effective optimization approach
employed to improve the efficiency of the model. In knowledge dis-
tillation, a smaller, more compact model (the student) is trained to
replicate the behavior of a larger, more complex model (the teacher).
By transferring the knowledge from the teacher model to the student
model, we can achieve similar performance with significantly reduced
computational requirements. This technique allows us to balance the
trade-off between accuracy and efficiency, making the model suitable
for deployment on devices with limited processing power.
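The core of knowledge distillation, softening both models' outputs with a temperature before comparing them, can be sketched in a few lines; the logits and temperature value below are illustrative.

```python
# Hedged sketch: temperature-scaled softmax, the key ingredient of
# knowledge distillation. Higher temperatures soften the teacher's
# distribution, exposing how it ranks the non-target classes so the
# student has a richer signal to imitate.
import math

def softmax_with_temperature(logits, T=1.0):
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [8.0, 2.0, 1.0]
sharp = softmax_with_temperature(teacher_logits, T=1.0)
soft = softmax_with_temperature(teacher_logits, T=4.0)
# The softened distribution is much less peaked than the sharp one.
print(round(sharp[0], 3), round(soft[0], 3))
```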
5. Implementation and Integration: Once the model is trained and
optimized, the next step is to integrate it into a working system. This
involves developing a software pipeline that captures real-time video
or webcam input, processes the frames, and passes them through
the object detection model. The system also handles post-processing
steps like filtering out irrelevant detections, drawing bounding boxes
around detected objects, and displaying the results to the user. The
system is implemented using popular deep learning frameworks like
TensorFlow or PyTorch and integrated with computer vision libraries
such as OpenCV for real-time video processing.
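The post-processing step of filtering out redundant detections is commonly done with non-maximum suppression (NMS). A minimal pure-Python version, using intersection-over-union (IoU), might look like the sketch below; the 0.5 IoU threshold is a typical but illustrative choice.

```python
# Hedged sketch: greedy non-maximum suppression. Detections are sorted
# by confidence; each kept box suppresses lower-scoring boxes that
# overlap it by more than the IoU threshold.

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def nms(detections, iou_thresh=0.5):
    """detections: list of (box, score) with box = (x1, y1, x2, y2)."""
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, k[0]) <= iou_thresh for k in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
print(nms(dets))  # the 0.8 box overlaps the 0.9 box and is suppressed
```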
6. Performance Evaluation: After the model is integrated into the
system, it is essential to evaluate its performance under various real-
world conditions. Key evaluation metrics include:
• Accuracy: Measures the percentage of correctly identified ob-
jects out of the total number of objects in the test dataset.
• Precision and Recall: Precision indicates how many of the
detected objects were correct, while recall measures how many of
the total objects were detected.
• F1-Score: A balance between precision and recall, providing a
harmonic mean of the two metrics.
• Processing Speed: The number of frames processed per second
(FPS), indicating the system’s real-time capabilities.
• Robustness: The ability of the model to maintain accuracy de-
spite challenges like poor lighting, varying object distances, and
occlusions.
This phase helps identify areas for improvement and ensures the sys-
tem meets the project requirements.
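The precision, recall, and F1 metrics above follow directly from counts of true positives (TP), false positives (FP), and false negatives (FN); a small sketch of the arithmetic, with illustrative counts:

```python
# Hedged sketch: precision, recall, and F1 from detection counts.
# TP: correct detections, FP: spurious detections, FN: missed objects.

def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# 80 correct detections, 10 false alarms, 20 missed objects
p, r, f1 = detection_metrics(tp=80, fp=10, fn=20)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.889 0.8 0.842
```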
7. Iteration and Improvement: The development of the object de-
tection system follows an iterative process, where the model un-
dergoes continuous refinement based on the results of performance
evaluation. After an initial round of testing, any performance issues
that arise—such as difficulties in low-light conditions or challenges
in detecting overlapping objects—are carefully analyzed. This it-
erative approach allows the system to be progressively improved to
handle real-world scenarios with greater accuracy and robustness.
If performance issues are detected, the model is fine-tuned by adjust-
ing the hyperparameters or modifying its architecture. This fine-
tuning ensures that the model can better handle challenging situations
like poor lighting or occlusions, which are common in traffic monitor-
ing environments. In addition, the dataset is expanded to include
more diverse samples. This helps the model learn to detect objects
under a wider range of conditions, including variations in weather,
lighting, and camera angles. By adding new samples, particularly
those with difficult cases such as partially obscured number plates or
helmets, the model is trained to be more resilient to such challenges.
Several advanced preprocessing techniques are employed to im-
prove the model’s performance. For example, histogram equaliza-
tion is applied to images to enhance contrast and improve visibility,
especially in low-light environments. This preprocessing step ensures
that objects, such as helmets and number plates, are more easily dis-
tinguishable from the background, even in difficult lighting conditions.
Furthermore, multi-object detection strategies are incorporated
to improve the model’s ability to handle overlapping or clustered ob-
jects, ensuring that the system can accurately detect multiple items
within the same frame.
26
In parallel with dataset augmentation and preprocessing improve-
ments, hyperparameter tuning is performed to find the optimal
configuration for the model. This involves adjusting parameters such
as learning rate, batch size, and the number of layers to achieve the
best possible performance. After these adjustments, the model un-
dergoes retraining to further improve its accuracy and efficiency,
ensuring that it can handle more complex traffic scenarios and oper-
ate with higher precision.
8. Deployment: Once the model has been fully optimized, validated,
and thoroughly tested, it is ready for deployment in real-world
applications. The primary focus of this system’s deployment is in
environments such as traffic monitoring systems and security
surveillance, where its ability to detect helmets and number plates
in real-time is crucial for improving traffic safety and security. In
these real-world environments, the model performs object detection
directly on video feeds, processing frames and identifying objects in
real-time to aid in tasks like enforcement of helmet laws or vehicle
identification.
The deployment process involves several critical steps to ensure that
the system operates efficiently and accurately once it is integrated into
the target environment. First, the model is deployed onto the appro-
priate hardware—whether it’s edge devices with limited compu-
tational power or more powerful cloud-based servers depending on
the use case and infrastructure requirements. In real-time applica-
tions, such as traffic monitoring, latency and speed are paramount,
so ensuring that the system meets these performance standards is a
key part of deployment.
System monitoring is essential during deployment to ensure that
the model performs consistently over time. Monitoring includes track-
ing key performance indicators (KPIs) such as detection accuracy,
processing speed, and overall system reliability. If performance devi-
ations are observed, such as a decrease in detection accuracy or an
increase in processing time, immediate corrective actions are taken,
which may involve adjusting system parameters or retraining the
model on updated data.
Additionally, feedback from real-world usage is actively gathered
to identify potential issues or areas for improvement. This feedback
could come from traffic authorities, security personnel, or other end-
users who interact with the system regularly. Based on the gath-
ered feedback, updates are periodically made to both the model and
the software to adapt to new challenges, enhance detection accu-
racy, or integrate new features. For example, the model may need to
be retrained with new datasets that reflect changing traffic patterns,
weather conditions, or regional variations in helmet and number plate
designs.

Chapter 4

Designing of Project
4.1 0-Level DFD

Figure 4.1 outlines the general flow of a helmet and number plate de-
tection system, showing the interaction between the camera, detection
algorithm, and the user interface.

Figure 4.1: 0-level DFD

A Level 0 Data Flow Diagram (Figure 5.1) serves as an excellent start-


ing point for understanding the helmet and number plate detection
model. At this level, it’s essential to capture the overarching system
and its interactions with external entities without delving into the
intricate details of the internal processes. The primary inputs for the
system include video feeds from surveillance cameras, which are cap-
tured in real-time. These feeds are sent to the helmet and number
plate detection system, which acts as the main process node in the
DFD.
Upon receiving the video input, the system processes the data to
identify whether individuals are wearing helmets and to detect vehi-
cle number plates. The outputs generated from this process include
alerts for non-compliance regarding helmet usage and recorded details
of number plates for tracking or law enforcement purposes. Addi-
tionally, the system may send real-time notifications to a centralized
monitoring station, which can take further action if violations are de-
tected. Through this Level 0 DFD, stakeholders can gain an overview
of how inputs flow into the system, the main functionalities involved,
and the outputs produced, thus establishing a foundational under-
standing of the helmet and number plate detection model.

4.2 1-Level DFD

Figure 4.2 depicts the detailed workflow for helmet and number plate
detection, focusing on key phases such as Image Acquisition, Prepro-
cessing, Detection, and Output Generation.

Figure 4.2: 1-level DFD

The main objective of this model is to enhance road safety by ensuring that motorcyclists wear helmets and that vehicles are identifiable
through number plate recognition. At the center of the DFD lies
the detection model, which is responsible for processing input data,
namely images or video feeds captured from surveillance cameras.
This model employs advanced image processing techniques and ma-
chine learning algorithms to identify whether a helmet is worn by the
rider and to extract the number plate information of vehicles.
The external entities in the DFD include the surveillance camera,
which feeds live data into the detection model, and the law enforce-
ment database, which is utilized to verify the extracted number plate
against registered vehicles. An important aspect of the system is the
feedback mechanism, where alerts or notifications can be sent to offi-
cials when a violation is detected—such as a rider without a helmet
or an unregistered vehicle. Data stores are also represented in the
DFD, with one housing historical records of detected violations and
another maintaining a repository of valid number plate information.
This structured overview in the Level 1 DFD not only facilitates a
clearer understanding of the system’s functionalities but also aids in
identifying areas for optimization and integration with broader traffic
management systems.

4.3 2-Level DFD

Figure 4.3 expands on the workflow of the helmet and number plate
detection system, providing further details for each phase. This in-
cludes more advanced segmentation, feature extraction, and model
refinement for better detection accuracy.

Figure 4.3: 2-level DFD

A Data Flow Diagram (Figure 4.3) is an essential tool in the design and analysis of systems, providing a visual representation of how
data moves through a system. In the context of a helmet and num-
ber plate detection model, a 2-level DFD can effectively illustrate the
interactions between different components involved in the detection
process. At the first level, the DFD might depict the overall sys-
tem as a singular entity that receives input data in the form of video
feeds from surveillance cameras. This data is then processed to iden-
tify whether individuals are wearing helmets and to recognize vehicle
number plates. The primary functions at this level would encompass
‘Data Acquisition’, ‘Detection Processing’, and ‘Output Generation’.
Delving into the second level of the DFD, this phase breaks down
the detection processing into more specific subprocesses. For hel-
met detection, tasks such as ‘Image Preprocessing’, ‘Feature Extrac-
tion’, and ‘Classification’ could be highlighted, illustrating how raw
footage is transformed into actionable insights. Similarly, the number
plate recognition process might include steps like ‘Region of Interest
Selection’, ‘Optical Character Recognition (OCR)’, and ‘Data Stor-
age’. Each subprocess can be linked to data stores representing the
databases where the detected information is stored for future reference
or analysis. This comprehensive view not only aids in understanding
the workflow but also facilitates troubleshooting and optimization of
the model, ensuring effective monitoring of compliance with safety
regulations and assisting in vehicle tracking for various applications.
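As an illustration of the ‘Region of Interest Selection’ subprocess described above, the sketch below filters candidate bounding boxes by the geometry typical of number plates before they are handed to OCR. The aspect-ratio and area thresholds are illustrative assumptions, not values taken from the system.

```python
def select_plate_candidates(boxes, min_aspect=2.0, max_aspect=6.0, min_area=400):
    """Filter (x, y, w, h) boxes to those shaped like number plates.

    Typical plates are wide and short, so we keep boxes whose
    width/height ratio falls in [min_aspect, max_aspect] and whose
    area is large enough to contain readable characters.
    (All thresholds here are illustrative assumptions.)
    """
    candidates = []
    for (x, y, w, h) in boxes:
        if h == 0:
            continue
        aspect = w / h
        if min_aspect <= aspect <= max_aspect and w * h >= min_area:
            candidates.append((x, y, w, h))
    return candidates

# Example: three detections; only the wide, plate-shaped one survives.
boxes = [(10, 10, 120, 30),   # aspect 4.0, large enough -> plausible plate
         (50, 50, 40, 40),    # square -> likely a helmet/head, rejected
         (5, 5, 30, 6)]       # aspect 5.0 but only 180 px^2, rejected
print(select_plate_candidates(boxes))  # -> [(10, 10, 120, 30)]
```

Only the surviving regions would then be cropped and passed to the OCR stage, which keeps the character-recognition workload small.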

4.4 Use Case Diagram
Figure 4.4 illustrates the use case for the helmet and number plate
detection system, involving two main actors: the Developer and the
User.

Figure 4.4: Use Case Diagram

1. Developer’s Role: The developer works on the back-end of the
system. Key responsibilities include:
• Input: Capturing images or video feed through the camera.
• Data Preprocessing: Cleaning and enhancing the images for
better detection.
• Model Training: Training machine learning models (e.g., YOLO,
Faster R-CNN) to detect helmets and number plates.
• Feature Extraction: Extracting features like shape, size, and
color for better detection accuracy.
• Testing and Evaluation: Ensuring the model performs well
through testing and evaluation with real-world data.

2. User’s Role: The user interacts with the front-end interface to use
the helmet and number plate detection system. Key user activities
include:
• Start Detection: The user activates the detection system to
begin capturing footage.
• View Results: The system displays results, indicating whether
a helmet is detected and whether the number plate is visible.
• Monitor: The user monitors the results in real-time for safety
and compliance purposes.

Chapter 5

Detailed Designing of Project

5.1 Class Diagram

The following Figure 5.1 represents the detailed workflow and class
structure of a system designed for automatic helmet and number plate
detection in images. The system can be broken into primary cate-
gories: Image Acquisition, Pre-processing, Detection, and Classifica-
tion. Further details are provided in the description of components
and their interrelationships.
It is used for general conceptual modeling of the structure of the
application, and for detailed modeling, translating the models into
programming code.
The architecture of the helmet and number plate detection system is
logically structured into four primary modules: Image Acquisition,
Pre-processing, Detection, and Classification. These compo-
nents form the backbone of the system, each playing a specific role
and working in cohesion to ensure accurate and efficient identification
of helmets and vehicle number plates from real-time or recorded video
streams.
Image Acquisition serves as the initial entry point of the system,
responsible for capturing input data in the form of images or video
frames. This module interfaces directly with input hardware such as
CCTV cameras, smartphone cameras, or dashcams. It ensures that
frames are consistently captured at regular intervals and passed along
the pipeline for further processing.
Following acquisition, the Pre-processing module comes into play.
This component prepares raw images for analysis by enhancing qual-
ity and reducing noise. Typical pre-processing steps include resiz-
ing, grayscale conversion, histogram equalization, and normalization.
These steps are crucial to improve the consistency of input data and to
enhance the accuracy of the detection model in later stages. The pre-
processing module also ensures that images are formatted correctly
for the neural networks used in detection.
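The pre-processing steps named above (grayscale conversion, histogram equalization, normalization) can be sketched in a few lines. The version below is a dependency-free NumPy illustration; an actual implementation would more likely use the OpenCV equivalents (cv2.cvtColor, cv2.equalizeHist, cv2.resize).

```python
import numpy as np

def preprocess(frame):
    """Grayscale -> histogram equalization -> [0, 1] normalization.

    A minimal sketch of the pre-processing stage described above,
    assuming an (H, W, 3) uint8 RGB frame as input.
    """
    # Grayscale via the usual luma weights.
    gray = (0.299 * frame[..., 0] + 0.587 * frame[..., 1]
            + 0.114 * frame[..., 2]).astype(np.uint8)

    # Histogram equalization: remap intensities through the CDF
    # so the full 0-255 range is used, improving contrast.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    equalized = cdf[gray].astype(np.uint8)

    # Normalize to [0, 1] floats for the neural network.
    return equalized.astype(np.float32) / 255.0

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
out = preprocess(frame)
print(out.shape, out.min() >= 0.0, out.max() <= 1.0)  # (64, 64) True True
```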

Figure 5.1: Class Diagram

5.2 Entity-Relationship Diagram (ERD)

Figure 5.2 represents the Entity-Relationship (ER) Diagram of the
helmet and number plate detection system.

Figure 5.2: ER Diagram

The ER diagram describes the relationships between various entities
in the system, such as images, vehicles, helmets, and detected plates.
1. Vehicle: Attributes: Vehicle ID, Vehicle Type, Vehicle Model,
Vehicle Make, License Plate Number. This entity represents vehicles
detected within the system, including the vehicle’s license plate.
2. Person: Attributes: Person ID, Name, Helmet Status, Gender,
Contact Information. This entity stores details about the person in
the image, along with the helmet detection status.
3. Helmet Detection: Attributes: Detection ID, Helmet Detected,
Confidence Score, Person ID. This tracks the status of helmet detec-
tion for each person, with a confidence score to indicate the detection
accuracy.

5.3 Activity Diagram

The figure 5.3 shows the workflow of the helmet and number plate
detection system, from image acquisition to detection and classifica-
tion.

Figure 5.3: Activity Diagram

Typically, an event within a system must be accomplished through a
sequence of coordinated operations. This is especially relevant in
scenarios where a single operation is designed to fulfill multiple ob-
jectives simultaneously, often involving complex interactions among
various system components. Such situations demand effective coordi-
nation to ensure that the overall system behavior aligns with intended
goals.

This need for coordination becomes even more critical in use cases
where multiple events or activities occur concurrently or in overlap-
ping sequences. In these cases, it is important to model not just indi-
vidual events in isolation, but also how they interrelate and depend
on one another within a broader workflow. Understanding these rela-
tionships is essential for capturing the dynamic behavior of the system
and for identifying potential conflicts or synchronization points that
may arise during execution.
Moreover, this approach is particularly valuable for representing busi-
ness workflows, where multiple use cases must work in harmony to
deliver a cohesive service or achieve a strategic objective. By model-
ing the coordination among various use cases, one can visualize and
optimize the flow of operations, identify process bottlenecks, and en-
sure alignment with business rules and stakeholder expectations.
Thus, modeling coordinated operations and interrelated events pro-
vides a comprehensive framework for understanding and designing
complex systems, ensuring that they function efficiently and reliably
in real-world scenarios.

5.4 Sequence Diagram

The following Figure 5.4 illustrates the step-by-step flow for detect-
ing a helmet and number plate.

Figure 5.4: Sequence Diagram

The sequence diagram illustrates the dynamic interaction between
different components of the helmet and number plate detection system
over time. It models how objects within the system communicate with
each other to accomplish the task of detecting helmets and number
plates from image or video input. This diagram provides a clear view
of the flow of control and data, from the initial user or system action
to the final output generation, highlighting the sequence of function
calls and responses across various modules.
The process begins with the User or System Scheduler initiating
the process by sending a request to the Image Acquisition Module
to capture a frame or a sequence of frames. This module acts as the
entry point, interacting with a camera device or video file to retrieve
raw image data. Once the image is acquired, it is forwarded to the
Pre-processing Module.
In the Pre-processing Module, operations such as resizing, noise
removal, grayscale conversion, normalization, and contrast enhance-
ment are performed. These steps are critical to ensure that the image
quality is optimal and consistent before detection, minimizing poten-
tial sources of error and improving the accuracy of the subsequent
stages.

Chapter 6

Results and Discussions


We report the results of our proposed work, “Helmet and Number
Plate Detection using Deep Learning for Traffic Monitoring”.
This system is designed to identify helmets and number plates in real-
time, contributing to enhanced traffic enforcement. We began with
data collection, followed by data preprocessing and training the deep
learning models. The project involved using YOLOv5 and other deep
learning architectures for real-time detection of helmets and number
plates from traffic video feeds. We tested our approach on a dataset
of vehicles with and without helmets, as well as various number plates
under different environmental conditions. To begin with, the project
followed a structured workflow comprising multiple phases: data col-
lection, data preprocessing, model selection, training, evaluation, and
deployment. We collected a diverse dataset consisting of images and
video frames of two-wheeler riders—both wearing and not wearing
helmets—as well as vehicles displaying number plates under varying
environmental conditions, including daytime, nighttime, partial oc-
clusion, different camera angles, and motion blur.
After preprocessing the dataset (including resizing, normalization,
noise reduction, and annotation), we trained our detection models.
The YOLOv5 (You Only Look Once version 5) architecture was se-
lected for its proven balance between detection speed and accuracy.
This model supports real-time object detection and is well-suited for
edge devices or systems requiring low-latency inference.
The system was evaluated using standard performance metrics such
as precision, recall, F1-score, and mean Average Precision
(mAP). On our test set, the helmet detection model achieved an ac-
curacy of 92.6%, with a precision of 91.4% and a recall of 93.1%.
These results indicate that the model was highly effective in distin-
guishing between riders with and without helmets. The number plate
detection model yielded an accuracy of 89.8%, with slightly lower
performance in cases involving motion blur or plates captured at ex-
treme angles.
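The metrics quoted above follow the standard definitions. The sketch below shows how a detection is matched to ground truth by intersection-over-union (a 0.5 IoU threshold is a common convention, assumed here) and how precision, recall, and F1 follow from the true-positive, false-positive, and false-negative counts. The counts in the example are hypothetical, chosen only to produce figures of the same magnitude as those reported.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A detection matching ground truth with IoU >= 0.5 counts as a TP.
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333

# Hypothetical counts: 914 TP, 86 FP, 68 FN
# -> precision 0.914, recall ~0.931, comparable to the reported figures.
p, r, f = prf1(914, 86, 68)
print(round(p, 3), round(r, 3), round(f, 3))
```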
We also observed that YOLOv5’s ability to detect multiple objects
within a single frame allowed it to handle congested traffic scenar-
ios effectively. The average inference time per frame was 18 ms,
enabling smooth real-time processing even on mid-range hardware
configurations. Furthermore, integrating Optical Character Recogni-
tion (OCR) for number plate text extraction added an extra layer of
functionality. While OCR performed well under clear lighting, its ac-
curacy degraded under low-light or shadowed conditions, a limitation
that can be addressed in future improvements.
From the discussion standpoint, these results demonstrate that the
proposed system is reliable for real-time deployment in smart traffic
management systems. It provides a cost-effective solution for mon-
itoring road safety compliance, especially in urban areas with high
volumes of two-wheeler traffic. Moreover, the system’s modularity
allows it to be integrated with alert systems, penalty issuance plat-
forms, or centralized traffic databases.
However, certain challenges were noted during experimentation. In
particular, helmet detection occasionally misclassified dark head cov-
erings (such as caps or scarves) as helmets. Similarly, number plate
detection faced difficulties in cases where plates were dirty, bent, or
partially obstructed. These limitations suggest the need for additional
dataset augmentation or the inclusion of more complex recognition
pipelines in the future.
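The dataset augmentation suggested above can start with simple photometric and geometric transforms. The sketch below applies a random horizontal flip and brightness jitter; the flip probability and gain range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Random horizontal flip and brightness jitter (illustrative values).

    Simple augmentations like these expose the model to mirrored
    riders and varied lighting, which can help with failure cases
    such as dark headgear and low-contrast plates.
    """
    out = image.copy()
    if rng.random() < 0.5:              # horizontal flip
        out = out[:, ::-1]
    gain = rng.uniform(0.7, 1.3)        # brightness jitter
    return np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)

img = np.full((32, 32, 3), 128, dtype=np.uint8)
aug = augment(img)
print(aug.shape, aug.dtype)  # (32, 32, 3) uint8
```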
To summarize, the project successfully achieved its objectives and
demonstrated the feasibility of using deep learning-based object de-
tection for automated traffic rule enforcement. The system performs
robustly under standard conditions and can be enhanced further with
improvements in dataset diversity, lighting normalization techniques,
and hardware optimization.

Figure 6.1: Home Page

In figure 6.1, the main detection interface demonstrates how the sys-
tem displays real-time results for helmet and number plate detection.
After initiating the detection process, the interface clearly shows the
detected helmets and number plates along with their confidence lev-
els. The bounding boxes around the objects indicate the precise ar-
eas of detection, making it easy for users to understand the system’s
performance. The system is designed to work under various traffic
conditions, and users can view real-time results with high efficiency.

Figure 6.2: Uploading Vehicle Images for Detection

In figure 6.2, the detection page allows users to upload images of
vehicles for helmet and number plate detection. Once the image is
uploaded, the system processes the image and displays the results in
real-time. The user-friendly interface simplifies the process, making
it easy for users to interact with the system. The upload function
is quick and efficient, allowing for a seamless user experience when
testing vehicle images for helmet and number plate detection.

Figure 6.3: Detection Results Page

In figure 6.3, the results page presents the outcomes of the helmet and
number plate detection process. The page displays detected helmets
with a bounding box around the headgear, as well as identified num-
ber plates with corresponding regions of interest marked. The system
also shows the confidence levels for each detection. The results are
clear and easy to interpret, helping law enforcement officials quickly
assess the effectiveness of the detection system.
Overall, our proposed system achieved good accuracy in detecting hel-
mets and number plates in controlled environments. However, perfor-
mance was impacted by poor lighting conditions and image quality.
Future improvements could focus on enhancing the model’s robustness
to these challenges, ensuring better real-time performance in varying
traffic conditions.

Chapter 7

Further Work

(a) Dataset Expansion and Diversity: One of the most critical
factors influencing the performance of machine learning models,
particularly those designed for object detection, is the quality and
diversity of the dataset used for training. In the context of our
helmet and number plate detection system, the incorporation of a
more diverse dataset could lead to significant improvements in
the model’s ability to generalize to various real-world scenarios.
Currently, the dataset we employed contains images and videos
captured under specific conditions—namely standard daylight and
moderate traffic. However, real-world traffic environments present
a wide variety of challenges that could affect model performance.
These challenges include variable lighting conditions, different
camera qualities, and the presence of diverse vehicle types and
license plate designs.
To improve the robustness of our model, it is essential to col-
lect data across a wider range of environments. For example,
we could incorporate nighttime footage to ensure the model
is capable of detecting helmets and number plates in low-light
conditions, which are typical in urban traffic. Additionally, ad-
verse weather conditions, such as fog or rain, could impact
image clarity and object visibility. A diverse dataset would allow
the model to learn how to handle such conditions and mitigate
performance degradation under them.
(b) Handling Occlusions and Multiple Object Detection: One
of the major challenges faced by the current model is the detection
of overlapping objects or occlusions. Future work could focus on
developing more sophisticated techniques for multi-object detec-
tion and occlusion handling, ensuring more accurate predictions
in cluttered or complex scenes. A significant challenge faced by the
current model is its ability to accurately detect helmets and num-
ber plates when objects are occluded or when multiple objects
are present within the same frame. In real-world traffic environ-
ments, occlusions (when objects are partially or fully blocked
by other objects) and cluttered scenes (where multiple vehicles
and riders appear together) are common occurrences. These chal-
lenges can significantly impact the performance of object detec-
tion models, leading to false negatives or misclassifications.
In particular, the detection of helmets can be obstructed by ob-
jects such as rider clothing, vehicle parts (e.g., mirrors or wind-
shields), or other riders in traffic. Similarly, number plates may
be partially blocked by dirt, shadows, or objects in the environ-
ment. The existing YOLOv5 model, while effective in detecting
clear and unobstructed objects, struggles in cases of occlusion or
when multiple helmets and number plates are visible in the same
frame.
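Crowded frames are also sensitive to how duplicate detections are suppressed. The sketch below shows greedy non-maximum suppression (NMS), the standard post-processing step in YOLO-style detectors: lowering the IoU threshold suppresses more duplicates but risks discarding genuinely overlapping riders, which is one reason occluded scenes are hard. The 0.5 threshold is an assumed, commonly used value.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes: list of (x1, y1, x2, y2); scores: matching confidences.
    Keeps the highest-scoring box, removes boxes overlapping it by
    more than iou_thresh, and repeats. In congested traffic this
    collapses duplicate detections of the same rider or plate.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep

# Two near-duplicate helmet boxes and one distinct plate box.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 80, 60)]
scores = [0.9, 0.8, 0.95]
print(nms(boxes, scores))  # [2, 0] -> duplicate box 1 suppressed
```

Variants such as Soft-NMS, which decays scores instead of discarding boxes outright, are one direction for handling genuinely overlapping objects.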
(c) Model Optimization for Edge Devices: Currently, the model
requires substantial computational resources for real-time pro-
cessing. Future work could explore optimization techniques, such
as model pruning or quantization, to enable deployment on edge
devices with limited computational power. One of the key chal-
lenges in deploying deep learning models for real-time applica-
tions, especially in traffic monitoring systems, is the significant
computational power required for inference. The current helmet
and number plate detection system, based on the YOLOv5 ar-
chitecture, delivers high accuracy but demands substantial com-
putational resources, making it unsuitable for edge devices with
limited processing capabilities, such as embedded systems, mo-
bile devices, or low-cost traffic cameras. To address this, future
work can explore several optimization techniques to reduce the
model’s computational load without sacrificing performance.
One approach is model pruning, which involves removing less
important neurons, weights, or filters from the neural network.
This reduces the size of the model, making it more efficient for
edge devices while maintaining accuracy. By pruning the model,
we can significantly improve inference speed and memory usage,
allowing it to run effectively on hardware with limited resources.
Additionally, model quantization can be applied, which re-
duces the precision of the model’s weights and activations. In-
stead of using 32-bit floating-point numbers, quantization uses
lower-bit precision, such as 8-bit integers. This reduces the model
size and accelerates computation, especially when leveraging hard-
ware accelerators like Tensor Processing Units (TPUs) or
Neural Processing Units (NPUs) that are optimized for in-
teger operations.
Furthermore, knowledge distillation offers another promising
solution. In this technique, a smaller, more efficient model (the
student) is trained to mimic the behavior of a larger, more com-
plex model (the teacher). The student model learns from the soft
labels generated by the teacher and captures the essential fea-
tures of the original model, resulting in a smaller, faster version
with comparable accuracy.
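The 8-bit quantization described above can be sketched as an affine mapping between float weights and integers. The NumPy illustration below stores a (scale, zero-point) pair so the integers can be dequantized at inference; a real deployment would use a framework's quantization toolchain rather than this hand-rolled version.

```python
import numpy as np

def quantize_uint8(weights):
    """Affine quantization of float32 weights to 8-bit integers.

    Maps [w_min, w_max] linearly onto [0, 255]; the (scale, zero_point)
    pair is kept so the integers can be dequantized later.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0   # guard constant weights
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale, zp = quantize_uint8(w)
err = np.abs(dequantize(q, scale, zp) - w).max()
# Reconstruction error stays within ~2 quantization steps.
print(q.dtype, bool(err < 2 * scale))  # uint8 True
```

The 4x size reduction (32-bit floats to 8-bit integers) is exactly the saving described above, at the cost of this bounded rounding error.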
(d) Integration with Real-Time Systems: While the proposed
model is effective in controlled settings, further research could in-
volve its integration with real-time traffic monitoring or surveil-
lance systems. This would include designing interfaces with cam-
eras, sensors, and existing infrastructure to facilitate real-time
decision-making. While the proposed helmet and number plate
detection model demonstrates effectiveness in controlled settings,
its true potential can be realized through integration with real-
time traffic monitoring and surveillance systems. One of the key
challenges for future research is designing seamless interfaces be-
tween the model and existing infrastructure, such as cameras, sen-
sors, and other monitoring devices, to enable real-time decision-
making. Real-time integration is essential for applications such
as traffic enforcement, accident detection, and law enforcement,
where timely and accurate data is critical for public safety.
To achieve this, the model would need to be embedded within a
real-time processing pipeline that receives live video feeds or
sensor data. The first step would involve integrating the model
with traffic cameras and sensors that capture vehicle images, li-
cense plates, and rider behavior. These cameras could be po-
sitioned at key traffic intersections, highways, or parking lots to
monitor vehicle movement and ensure compliance with traffic reg-
ulations. The system must be capable of processing these feeds
in real time, with minimal latency, while maintaining a high level
of accuracy. This would likely require the model to be optimized
for edge devices, allowing for local processing without the need
to send data to a central server, reducing latency and bandwidth
usage.
Additionally, data fusion from multiple sources such as traffic
sensors, CCTV cameras, and vehicle recognition systems could be
incorporated to improve the reliability of the system. Combining
data from various sensors, such as radar, LiDAR, or infrared
cameras, would help the model better handle challenging con-
ditions like poor visibility, adverse weather, or nighttime traffic.
These sensor integrations would allow the system to make more
informed decisions and provide more robust monitoring.
Lastly, ensuring the system’s scalability is another crucial fac-
tor for integration with real-time systems. As traffic networks
grow in size and complexity, the model must be able to han-
dle an increasing number of vehicles and riders across multiple
cameras and locations. Distributed computing or cloud-based
systems could be used to scale the system’s capabilities, enabling
it to manage large datasets and provide real-time insights across
extensive urban areas.
(e) Improving Generalization across Diverse Environments:
Future work could focus on enhancing the model’s ability to gen-
eralize across various environments and geographical regions. En-
suring that the model works effectively in diverse environmental
settings will be critical for its broad deployment. One of the pri-
mary challenges in deploying deep learning models like the helmet
and number plate detection system is ensuring their robustness
and ability to generalize across diverse environmental conditions.
While the proposed model performs well in controlled or ideal set-
tings, future work should focus on improving its ability to handle
a variety of real-world environments. This involves making the
system more adaptable to different lighting conditions, weather
scenarios, geographic regions, and vehicle types. For example,
the model must be able to handle nighttime traffic with limited
visibility, as well as varying weather conditions such as rain, fog,
or snow, which can significantly impact the clarity of the images
or videos captured by traffic cameras.
Furthermore, vehicles in different geographical regions may have
varying license plate designs, sizes, or even languages, which poses
an additional challenge for the model’s generalization. To ensure
the model works effectively across various locations, it is essential
to collect and incorporate a diverse dataset that includes vehi-
cles from multiple regions with different types of number plates
and helmet designs. By diversifying the dataset, the model can
learn the nuances of these variations and become more robust
when deployed in new areas.
(f) Deployment in Adverse Weather Conditions: The current
model may not perform optimally in adverse weather conditions
such as rain, fog, or low light. Future efforts could focus on im-
proving the robustness of the system to handle weather-related
challenges, possibly through the use of advanced image processing
techniques or sensor fusion. A significant challenge for the helmet
and number plate detection system is ensuring its reliable per-
formance under adverse weather conditions, such as heavy rain,
fog, or low light. These weather-related challenges can drastically
affect the quality of input data, leading to reduced accuracy in ob-
ject detection tasks. For example, rain can cause water droplets
on the camera lens, blurring the image, while fog can obscure
the view, making it difficult for the model to distinguish between
objects. Similarly, low-light conditions, particularly at night, can
result in poor visibility, causing detection errors or delays in real-
time processing.
To address these challenges, future work could focus on improv-
ing the system’s robustness through the application of advanced
image processing techniques that enhance the quality of the
input data under adverse conditions. For instance, image de-
noising algorithms could be used to remove noise introduced by
rain or fog, while contrast enhancement techniques could help
improve visibility in low-light conditions. Techniques like dehaz-
ing or rain removal have been shown to be effective in reducing
the impact of weather-induced distortions in visual data, enabling
clearer image processing and more accurate object detection.
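One of the simplest contrast-enhancement techniques for low-light frames is gamma correction, sketched below with an assumed gamma of 0.5: dark intensities are lifted while bright ones are nearly unchanged, and a lookup table keeps the per-frame cost to a single indexing pass.

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Brighten a low-light frame with gamma correction (gamma < 1).

    Pixel intensities are mapped through (v / 255) ** gamma, which
    lifts dark regions while leaving bright regions nearly unchanged.
    The 256-entry lookup table is built once and applied per frame.
    """
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[image]

dark = np.full((8, 8), 25, dtype=np.uint8)   # underexposed frame
bright = gamma_correct(dark)
print(int(dark[0, 0]), int(bright[0, 0]))    # 25 79
```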
(g) Human-Centric Applications: In addition to traffic and ve-
hicle monitoring, the model could be extended to other human-
centric applications such as helmet detection for safety in work-
places or public spaces. Future work could explore the broader
applicability of the system across different sectors. While the hel-
met and number plate detection model has shown its effectiveness
in traffic monitoring, its potential extends beyond this domain to
various human-centric applications. The core functionality of
the system, which includes real-time detection of helmets, can be
adapted for use in a wide range of sectors where safety is a critical
concern. One promising area for future expansion is workplace
safety. Many industrial sectors, such as construction, manufac-
turing, and mining, require workers to wear helmets to protect
themselves from head injuries. However, ensuring that workers
comply with safety regulations in real-time can be challenging.
By adapting the existing model, it could be used to monitor work
environments, automatically detecting whether workers are wear-
ing helmets and providing real-time alerts if they are not. This
could greatly enhance safety compliance and reduce the risk of
accidents in high-risk workplaces.
Additionally, the system could be applied in public spaces, such
as parks, event venues, or sports stadiums, where helmet use is
mandated for certain activities, such as biking, skateboarding, or
motorcycling. By leveraging the existing model, authorities or
event organizers could monitor large crowds to ensure that safety
protocols are being followed, especially in areas where helmet use
is a legal requirement. The ability to detect non-compliance in
real time could help enforce safety regulations more effectively,
reducing the likelihood of injuries in these environments.
(h) Exploring New Deep Learning Techniques: While YOLO-
based models have proven to be effective, newer deep learning
techniques such as Transformers or attention-based models may
provide further improvements in detection accuracy and speed.
Exploring these new architectures could lead to better perfor-
mance in challenging scenarios. Although YOLO-based models have
proven highly effective for real-time object detection tasks, there
is still significant room for improvement, particularly in chal-
lenging detection scenarios. Emerging deep learning architec-
tures, such as Transformers and attention-based models,
offer promising avenues for enhancing both the accuracy and
speed of detection systems like helmet and number plate recog-
nition. These newer techniques have shown remarkable perfor-
mance in tasks beyond traditional convolutional neural networks
(CNNs), particularly in handling complex, high-dimensional data
and long-range dependencies within an image.
Transformers, originally developed for natural language pro-
cessing, have recently been adapted for computer vision tasks
with significant success. The Vision Transformer (ViT) is
one such architecture that operates by treating image patches
as a sequence, similar to how tokens are treated in text, allow-
ing the model to capture long-range relationships between image
features. This ability to model global dependencies across the
entire image makes Transformers potentially more effective than
traditional CNNs, particularly in cluttered or occluded environ-
ments where local features alone may not suffice. By leveraging
Transformer-based architectures, the helmet and number plate
detection model could achieve higher accuracy, especially in com-
plex traffic scenes or in instances where multiple objects overlap
or are partially obscured.
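The patch-as-token idea behind the Vision Transformer can be shown in a few lines: the image is cut into fixed-size patches, each flattened into a vector, and the resulting sequence is what the attention layers consume. The sketch below covers only this patch-embedding step, not the Transformer itself.

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patches.

    This is the first step of a Vision Transformer: each patch
    becomes one 'token', so a 224x224x3 image with 16x16 patches
    yields a sequence of 196 tokens of dimension 768.
    """
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    tokens = (image
              .reshape(h // patch, patch, w // patch, patch, c)
              .transpose(0, 2, 1, 3, 4)   # group the two patch-grid axes
              .reshape(-1, patch * patch * c))
    return tokens

img = np.zeros((224, 224, 3), dtype=np.float32)
seq = patchify(img)
print(seq.shape)  # (196, 768)
```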
(i) User Interface and Visualization: Another aspect of future
work involves improving the user interface (UI) for system opera-
tors. The design of an intuitive UI for visualizing detections and
providing real-time alerts could significantly enhance the usability
of the system in practical applications.

By addressing these research gaps and exploring new directions, future
work could further enhance the system’s effectiveness, scalability, and
real-world applicability. The development of more robust, optimized,
and versatile models will open new possibilities for their deployment
in a variety of domains.

Bibliography

[1] Chidananda K. et al. Real-time object detection using YOLOv5
deep learning model. Journal of Real-Time Object Detection, 2023.
[2] J. Reddy Pasam et al. Detection of helmets and license plates
using deep learning techniques. Journal of Traffic Enforcement
and Monitoring, 2022.
[3] B. Amoolya et al. Deep learning methods for helmet detection
and license plate recognition. Journal of Deep Learning in Trans-
portation Systems, 2021.
[4] A. R. et al. Survey of helmet detection and number plate recog-
nition techniques. Journal of Machine Learning and Applications,
2021.
[5] Kulkarni P. S. et al. Real-time helmet detection using deep learning
and YOLO. Journal of Deep Learning and Real-Time Applications,
2021.
[6] Ghazali R. et al. Vehicle detection for smart city applications using
YOLOv3. Journal of Smart City Technologies, 2021.
[7] Wu Y. et al. Deep learning applications in intelligent transporta-
tion systems. Journal of Intelligent Transportation Systems, 2021.
[8] R. Roy et al. Machine learning techniques for helmet and license
plate recognition. Journal of Machine Learning in Traffic Man-
agement, 2021.
[9] Ahmad S. et al. Automatic license plate recognition using YOLOv3
and OpenCV. Journal of Automated Traffic Monitoring, 2021.

Appendix A

https://www.kaggle.com/datasets/andrewmvd/helmet-detection

Figure 7.1: Screenshot of the Dataset

Appendix B

Research paper

Helmet and Number Plate Detection
Syed Md Sheeraz, Udit Singh, Pawan Kumar Singh
Madhup Agrawal (Assistant Professor)
IT Department
Ajay Kumar Garg Engineering College, Ghaziabad

Abstract— Modern transportation systems require road
safety and strict adherence to traffic regulations. The following
paper provides a complete solution to helmet and number
plate detection through advanced computer vision and machine
learning techniques. This system identifies motorcyclists not
wearing helmets and reads vehicle registration numbers for
further action.
Our methodology combines object detection algorithms, such as
YOLO (You Only Look Once), with Optical Character Recogni-
tion (OCR) tools for accurate number plate extraction. A robust
dataset comprising diverse environmental conditions, including
varying lighting and weather scenarios, was used to train and
evaluate the system. Results show high accuracy in helmet
detection and number plate recognition under challenging real-
world conditions.
This work has great implications toward bettering road safety and automatic enforcement of traffic rules. Future developments may link it to real-time monitoring of traffic and expand its scope toward other violations, such as overspeeding and jumping signals.

I. INTRODUCTION

Road safety and traffic control are among the most significant challenges in modern society. Not wearing helmets while riding increases the chances of fatal injuries in the unfortunate event of an accident or collision. Another critical requirement is the ability to detect and recognize a vehicle's number plate with precise accuracy, which is part of enforcing traffic regulations and monitoring violations. This project focuses on developing a robust helmet and number plate detection system utilizing advanced computer vision techniques. The system, leveraging state-of-the-art object detection algorithms and Optical Character Recognition (OCR) technology, is capable of detecting motorcyclists not wearing helmets and reading vehicle registration numbers accurately. This project is designed to improve road safety, enhance the detection of traffic law violations, and eventually support the automation of violation detection systems.

Road safety is a serious matter with urbanization and the massive growth in the number of vehicles. Accidents have greatly increased due to lack of compliance with traffic safety rules, such as failing to wear helmets. This project focuses on building a Helmet and Number Plate Detection system using Artificial Intelligence and Computer Vision to solve this problem. The system uses advanced object detection models, such as YOLO (You Only Look Once) or similar algorithms, to identify motorcyclists without helmets in real-time. At the same time, Optical Character Recognition (OCR) techniques are used to extract and interpret vehicle registration numbers accurately. Designed for adaptability, the system can be deployed on roadside cameras or integrated into smart city traffic management systems. By automating these important detection tasks, the project expects to complement the authorities in enforcing safety regulations and preventing violations to ensure a safer road environment.

A. Research Objectives

This study aims to design and develop a robust system that uses advances in computer vision and machine learning to detect helmet usage and vehicle number plates. The project aims to automate the identification of motorcyclists who violate helmet laws and to recognize vehicle number plates to support enforcement of traffic regulations. By deploying techniques such as deep learning image analysis and Optical Character Recognition (OCR) to acquire alphanumeric data from number plates, the system aims to perform efficiently and reliably in real-time across different environmental conditions.

Further, the study aims to overcome challenges such as non-uniform lighting conditions, occlusions, and differing helmet designs and number plates, so that the system can adapt better and provide more accuracy. This research also examines the integration of the detection system with existing traffic monitoring infrastructure, so that authorities can store, analyze, and retrieve data for legal and planning purposes. Ultimately, the goal is to help ensure safer roads, promote compliance with traffic rules, and support the development of data-driven traffic management and policy-making.
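The detection-plus-OCR pipeline described in these objectives can be sketched in a few lines of Python. The fragment below is illustrative only: `detect_rider`, `has_helmet`, and `read_plate` are hypothetical stubs standing in for trained YOLO and OCR models, and a "frame" is a plain dictionary rather than a real video frame.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical stubs standing in for trained models: a real system
# would run YOLO inference for rider/helmet detection and an OCR
# model (e.g. CRNN) for plate reading.
def detect_rider(frame):
    return frame.get("rider_present", False)

def has_helmet(frame):
    return frame.get("helmet", False)

def read_plate(frame):
    return frame.get("plate")  # None if no plate is visible

@dataclass
class Violation:
    plate: str
    timestamp: str

def process_frame(frame, log):
    """Detect a rider, check helmet compliance, read the plate,
    and append a timestamped violation record to the log."""
    if not detect_rider(frame):
        return None
    if has_helmet(frame):
        return None  # compliant rider, nothing to log
    plate = read_plate(frame)
    if plate is None:
        return None  # violation seen but plate unreadable
    v = Violation(plate=plate, timestamp=datetime.now().isoformat())
    log.append(v)
    return v

log = []
process_frame({"rider_present": True, "helmet": True, "plate": "UP14AB1234"}, log)
v = process_frame({"rider_present": True, "helmet": False, "plate": "UP14XY9999"}, log)
print(len(log), v.plate)  # only the helmetless rider is logged
```

The design point here is that helmet classification gates plate reading: OCR only runs once a violation is suspected, which keeps the per-frame cost low.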

B. Scope of Study
This study encompasses the design and development of an
advanced system for detecting helmet usage and recognizing
vehicle number plates, aimed at enhancing road safety and
enforcing traffic regulations effectively. The project addresses
the need for automating helmet violation detection among
motorcyclists using computer vision techniques while recog-
nizing vehicle number plates accurately to identify traffic
violators and facilitate legal actions. It focuses on using
advanced image processing, machine learning, and deep
learning algorithms for real-time detection and employs Op-
tical Character Recognition (OCR) for accurate interpretation
of alphanumeric characters on number plates.
The system is designed to work in varied environmental conditions, such as changing lighting and occlusions, and to be scalably deployable in urban and semi-urban areas. In addition, the developed solution integrates with available traffic management systems, including CCTV networks, to store and retrieve data when needed for analysis or legal purposes. This study aims
at increasing road safety through compliance with helmets
and reducing traffic violations, while also aiding the traf-
fic authorities in observing and enforcing regulations more
efficiently. The study addresses issues of partial visibility,
various vehicle designs, and complicated traffic scenarios
to deliver a powerful and dynamic system for safe roads,
which will provide actionable insights into traffic planning
and policymaking.

Fig. 1. Block Diagram


II. BACKGROUND

The background of this project is the growing demand for innovative solutions to improve road safety and enforce traffic regulations. Motorcycles are a significant proportion of vehicles on roads around the globe, and non-compliance with helmet laws leads to a high number of head injuries and fatalities. Despite strict regulations in many regions, enforcing helmet usage remains a challenge because of the limited resources of law enforcement agencies and the huge volume of daily traffic.

At the same time, vehicle identification through number plates forms an important criterion in maintaining order on roads, redressing crimes, and holding people accountable for violations. The traditional methods to track and penalize these violations are labor-intensive, error-prone, and time-consuming. Recent developments in artificial intelligence (AI), computer vision, and machine learning have opened up ways to automate them with greater accuracy and efficiency.

This project is motivated by the potential to use these technologies to create an automated helmet and number plate detection system. By addressing issues such as manual enforcement inefficiencies, inconsistent monitoring, and the growing volume of traffic data, this research aims to contribute to smarter and safer urban mobility solutions, laying a foundation for intelligent traffic management systems.

III. PROPOSED MODEL

This model proposes an automatic, efficient, and accurate approach to helmet and number plate detection, promoting road safety by identifying motorcycle riders who do not use helmets and capturing details about their vehicles for further action. The system adopts advanced computer vision techniques combined with deep learning algorithms, thus achieving real-time detection at high precision levels. It consists of several inter-related modules: data acquisition, helmet detection, number plate recognition, and an integration and reporting module. Images or videos captured through strategically positioned surveillance cameras undergo preprocessing that enhances quality and removes noise. A deep learning-based object detection algorithm such as YOLO or Faster R-CNN is used to detect motorcycles, isolate the rider's head, and classify whether or not a helmet is worn. At the same time, the system detects the license plate with region-based convolutional neural networks (R-CNN) and extracts characters with an OCR method, for instance CRNN. The integration module enables violation logging, incident timestamping, and the creation of reports with actionable responses to share with traffic authorities. This is a complete system that runs on a strong software

and hardware architecture, with high-resolution cameras, edge devices or central servers, and programming tools including TensorFlow, PyTorch, and OpenCV. This replaces manual enforcement with an automated process. It further allows scaling and improvement in traffic law compliance in traffic-prone regions. The proposed model therefore contributes significantly towards people's safety and smarter management of traffic. It works toward developing a robust and efficient system that automatically identifies motorcycle riders without helmets, while simultaneously extracting vehicle number plate information so that the owner of the vehicle can be further identified. The model, therefore, integrates advanced computer vision techniques and deep learning algorithms to ensure precise real-time performance. Details of the proposed model are listed below.

Fig. 2. Flow Chart

A. Overview of the system

The system comprises the following modules:
1) Data Acquisition Module: Captures images or videos of motorcycles using strategically placed cameras on the roads or at checkpoints.
2) Helmet Detection Module: Identifies whether a rider is wearing a helmet using deep learning-based object detection and classification.
3) Number Plate Detection and Recognition Module: Locates and extracts the vehicle number plate, followed by character recognition to read the plate.
4) Integration and Reporting Module: Combines results from the above modules, logs violations, and generates alerts or reports for further action.

B. Key Components

The following components are used:
1) Input Data: Captured from surveillance cameras or other monitoring devices. Preprocessing enhances image quality and removes noise for better detection accuracy.
2) Helmet Detection: A pre-trained deep learning model, such as YOLO (You Only Look Once) or Faster R-CNN, is used to detect the rider and classify the presence of a helmet: detect motorcycles in the frame, focus on the rider's head area, and classify whether the rider is wearing a helmet from image features.
3) Number Plate Detection and Recognition: A region-based convolutional neural network (R-CNN) locates the number plate in the image, with bounding box extraction for precise localization. Optical Character Recognition (OCR) or deep learning-based text recognition models (e.g., CRNN) decode the alphanumeric characters, handling variations in plate styles, fonts, and lighting conditions.
4) Data Integration: Links detected violations (rider without helmet) with the corresponding vehicle number, and timestamps and geotags incidents for detailed reporting.
5) Reporting: Generates alerts for law enforcement or traffic management authorities, and maintains a database for violation history and automated fine issuance.

C. Workflow

The system works in the following flow:
1. Capture live traffic video streams or images.
2. Perform preprocessing to enhance visibility.
3. Detect motorcycles and focus on their riders.
4. Identify helmet presence using classification models.
5. Locate the number plate, extract it, and recognize its characters.
6. Cross-reference data and store results in the database.
7. Generate violation alerts and reports.

D. Advantages

The advantages of using this system are:
Automation: Reduces dependency on manual enforcement.
Accuracy: Ensures reliable detection and recognition through AI models.
Scalability: Applicable to high-traffic areas with minimal human intervention.
Integration: Can be expanded to include additional traffic monitoring features like speed detection.
The proposed model not only addresses the challenges of enforcing helmet laws and vehicle identification but also lays the groundwork for smarter traffic monitoring and public safety initiatives.

IV. TECHNOLOGY USED

The helmet and number plate detection project integrates a variety of advanced technologies spanning hardware and software, leveraging the power of computer vision, machine learning, and automation. Here is a detailed description of the technologies used:

Fig. 3. Technology Used

A. Software Requirement

a) OpenCV: OpenCV, or Open Source Computer Vision Library, is an open-source software library designed for real-time computer vision. It is widely used for functions such as object detection, image recognition, motion tracking, and facial detection, among others. With its real-time image and video stream processing ability, OpenCV is a popular library used in robotics, augmented reality, surveillance, and medical imaging. It offers support for operations such as resizing, cropping, filtering, and color space conversion, thereby making it a versatile choice for image and video manipulation.
One of the most crucial strengths of OpenCV lies in its integration capabilities: it supports multiple programming languages, like Python, C++, and Java, and works cross-platform on Windows, Linux, macOS, Android, and iOS. It is possible to combine it with machine learning frameworks, including TensorFlow, PyTorch, or Caffe, for deep learning tasks such as object detection and segmentation. Its real-time processing efficiency and ease of use make it suitable for applications that need both speed and accuracy. Also, being a free, open-source library, OpenCV benefits from a robust community of developers and researchers, offering extensive resources and support. Thus, OpenCV is a cornerstone technology for projects requiring sophisticated computer vision and image processing capabilities.

b) TensorFlow: TensorFlow is an open-source machine learning framework provided by Google, which makes building, training, and deploying machine learning models easy and efficient. It provides a comprehensive ecosystem of tools, libraries, and resources for developers and researchers to create solutions for tasks such as deep learning, neural network modeling, natural language processing, computer vision, and more. TensorFlow is designed to run on a wide range of platforms, including CPUs, GPUs, and TPUs, offering scalable, high-performance computation.
One of the major strengths of TensorFlow is its flexibility. Users can work at different levels of abstraction: beginners can use pre-built models and high-level APIs like Keras to quickly prototype and train models, while advanced users can access TensorFlow's lower-level operations to fine-tune algorithms and develop custom solutions. TensorFlow also provides TensorFlow Lite for mobile and edge devices, TensorFlow.js for running machine learning models in the browser, and TensorFlow Extended for deploying production-grade machine learning pipelines.
Another great feature of TensorFlow is distributed training, which enables developers to train models across multiple devices or even clusters of machines, making it suitable for big datasets and complex tasks. Its open-source nature fosters a healthy community that contributes to its development, creating a massive repository of pre-trained models, tutorials, and documentation. TensorFlow's versatility and scalability have made it popular around the world among machine learning practitioners and researchers.

c) NumPy: NumPy, short for Numerical Python, is a powerful open-source library in Python used mainly for numerical computing. It supports operations on large, multi-dimensional arrays and matrices, along with a wide range of mathematical functions to perform complex operations efficiently. NumPy is a fundamental library for scientific computing and is widely used in fields such as data analysis, machine learning, scientific research, and engineering.
The core strength of NumPy is vectorized operations, which enable users to manipulate an entire array without the use of explicit loops, thus performing operations much faster than standard Python. Its array object is optimized for performance and supports a wide variety of data types, which provides tremendous flexibility. Further capabilities include linear algebra, Fourier transforms, statistical operations, and random number generation.

d) CNN (Convolutional Neural Network): A CNN is a special kind of deep learning neural network that has been specifically designed to process structured grid-like data, like images or time series. CNNs have revolutionized computer vision and are now commonly used in image classification, object detection, facial recognition, and medical imaging. The architecture of CNNs is inspired by the visual cortex of the human brain and is very effective at learning spatial hierarchies of features. A CNN usually consists of convolution layers that apply filters to the input data to extract features like edges, textures, and more complex patterns. These layers are followed by pooling layers, which reduce the spatial dimensions of feature maps, retaining the most relevant information while reducing computational complexity. Non-linear activation functions, such as ReLU (Rectified Linear Unit), introduce non-linearity, enabling the network to learn complex patterns.
In later stages, fully connected layers process the learnt features for classification or regression tasks. Parameter sharing is one of the benefits of CNNs: it reduces the number of computations compared to a traditional network, and it also achieves translation invariance, which means that the same feature is recognized no matter where it is in the input.
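The pooling step described above, and NumPy's vectorized style, can both be shown in one tiny sketch. The feature-map values below are made up for the demo; a real CNN would operate on learned activations.

```python
import numpy as np

# A toy 4x4 "feature map" and a 2x2 max-pooling step, as described
# for CNN pooling layers: spatial size halves, and the strongest
# activation in each window is kept.
fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 3, 2],
                 [2, 6, 1, 1]], dtype=float)

# Reshape into 2x2 blocks and take the max of each block -
# fully vectorized, no explicit Python loops (NumPy's core strength).
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled.tolist())  # [[4.0, 5.0], [6.0, 3.0]]
```

The reshape trick groups the array into non-overlapping 2x2 windows; taking the maximum over the in-block axes performs the pooling in a single expression.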

e) YOLO (You Only Look Once): YOLO is an advanced, real-time object detection algorithm known for its speed and accuracy. Unlike other object detection methods, which process an image in multiple stages, YOLO treats object detection as a single regression problem, predicting both bounding boxes and class probabilities directly from the entire image in one evaluation. This innovative approach makes YOLO exceptionally fast and suitable for real-time applications. The algorithm divides the input image into a grid and assigns each grid cell the task of detecting objects that have their center in that cell. Each grid cell predicts bounding boxes, confidence scores, and class probabilities to ensure a comprehensive and streamlined detection process.
YOLO is commonly used in applications where real-time object detection is required: surveillance, autonomous driving, and robotics. Its CNN-based architecture is designed to understand spatial information and relationships well. Variants like YOLOv3, YOLOv4, and YOLOv5 have refined YOLO's performance by including better feature extraction at multiple scales and improved training strategies. This balance between precision and recall in real-time image processing makes YOLO one of the most popular choices in computer vision for tasks requiring high-speed object detection.

f) OCR (Optical Character Recognition): OCR is a technology used to transform different types of documents, such as scanned paper documents, PDFs, or images captured by a digital camera, into editable and searchable data. OCR works by analyzing the shapes of letters, numbers, and symbols in a given image and translating them into machine-encoded text. This process involves several stages, including image preprocessing, text detection, character segmentation, feature extraction, and text recognition.
OCR is widely used in all kinds of applications, ranging from archiving printed documents and reading the text of scanned documents, to processing invoices, receipts, and forms, and providing text-to-speech functionality for the visually impaired. It helps to automate data entry, thereby making information retrieval much more efficient. Modern OCR systems can be powered by machine learning and artificial intelligence so that they can handle complex fonts, handwriting, and noisy backgrounds, allowing for more accuracy and adaptability in real-world usage scenarios. Ultimately, OCR makes large volumes of printed material editable, thus enabling easier data manipulation and analysis.

B. Hardware Requirement

a) Processor: Intel Core i3
b) Processor speed: 1.0 GHz
c) RAM: 1 GB or above

V. RESULT AND DISCUSSION

This section presents the outcomes for two cases:
Scenario 1: When the rider of a motorbike is wearing a helmet.
Scenario 2: When the rider of a motorbike is not wearing a helmet and their licence plate is detected.
The discussion points out the scalability of the solution for smart city implementation. It can be easily integrated with traffic surveillance infrastructure by authorities in order to simplify violation monitoring and enforcement processes. Future enhancements may focus on improving detection in low-light conditions, expanding capabilities to recognize multiple violations simultaneously, and integrating the system with centralized databases for automated penalty issuance. The project lays a good foundation for intelligent traffic management systems, where road safety and compliance take precedence.

VI. CONCLUSION

Health is a basic necessity for every person, and road accidents can be so dangerous and fatal that lives are lost. Image and video processing for helmet and number plate identification has various uses, such as tracking traffic, following vehicles, and increasing road safety. The primary purpose of this project is to detect whether a rider is wearing a helmet through image or video processing with the help of CNNs. Moreover, the system uses TensorFlow to capture and determine the license plate number of the vehicle, so that efficient detection and enforcement is implemented.
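The grid-cell assignment described in the YOLO overview above can be illustrated in a few lines of plain Python. `responsible_cell` is a hypothetical helper written for this sketch, not part of any YOLO implementation, and the box coordinates are made up for the demo.

```python
# Illustrative sketch of YOLO's grid-cell assignment (not the real
# network): an object's box center determines which of the S x S
# grid cells is responsible for predicting it.
def responsible_cell(box, img_w, img_h, S=7):
    """box = (x_min, y_min, x_max, y_max) in pixels.
    Returns (row, col) of the grid cell containing the box center."""
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    col = min(int(cx / img_w * S), S - 1)
    row = min(int(cy / img_h * S), S - 1)
    return row, col

# A 448x448 frame: a rider's head near the top-left corner and a
# number plate near the bottom-centre.
print(responsible_cell((10, 20, 74, 84), 448, 448))      # (0, 0)
print(responsible_cell((200, 380, 260, 440), 448, 448))  # (6, 3)
```

Each cell would then predict bounding-box offsets, a confidence score, and class probabilities for the objects it is responsible for, which is what lets YOLO emit all detections in one forward pass.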

VII. FUTURE SCOPE

Enhancing Road Safety: Machine learning models can reduce accidents caused by violation of traffic rules by detecting helmets and number plates, which greatly enhances road safety and thus reduces injuries and deaths on the roads.
Supporting Law Enforcement: This technology can be used by the police department to identify vehicles that do not conform to traffic regulations, thereby enforcing the rules effectively, reducing traffic violations, and maintaining order on the roads and highways.
Optimizing Parking Management: Machine learning-based number plate detection can efficiently manage parking spaces, reducing parking-related congestion and improving overall traffic flow.
Traffic Flow Analysis: Using number plate detection, traffic patterns can be analyzed to help engineers make informed decisions about managing traffic and improving infrastructure.
Increasing System Accuracy: Although the system already has high accuracy, further improvements can come from advanced deep learning techniques, adding multiple sensors or cameras to the system, or adopting other image processing techniques.
Expanding System Capabilities: The technology can be broadened to recognize additional elements, such as pedestrians, vehicles, or traffic signs, making it more adaptable for various applications.
Real-Time Performance: While the system currently processes images and videos almost in real time, there is room to reduce processing latency. Future developments can focus on creating more efficient algorithms to achieve instant object detection and recognition.
Scaling for Larger Networks: The existing system works on individual images or videos. Future research could scale this technology to operate across extensive surveillance networks, monitoring entire cities or regions.
Integrating with Other Systems: Combining this system with traffic monitoring or accident prevention technologies could offer a more comprehensive solution to road safety.
In conclusion, machine learning-based helmet and number plate detection has vast potential in transportation and road safety applications, and its relevance is expected to grow significantly in the future.

VIII. REFERENCES

[1] A. R, S. S, L. Shreyas, N. Shree and P. B. H, “A Survey on Helmet Detection and Number Plate Recognition,” International Research Journal of Modernization in Engineering Technology and Science, vol. 03, no. 02, pp. 704-707, February 2021.
[2] W. Jia, S. Xu, Z. Liang, Y. Zhao, H. Min, S. Li and Y. Yu, “Real-time automatic helmet detection of motorcyclists in urban,” The Institution of Engineering and Technology, pp. 3623-3637, 2021.
[3] M. R, S. Raju, S. P Paul, S. Sajeev and A. Johny, “Detection of Helmetless Riders Using Faster R-CNN,” International Journal of Innovative Science and Research Technology, vol. 5, no. 5, pp. 1616-1620, 2020.
[4] S. Kanakaraj, “Real-time Motorcyclists Helmet Detection and Vehicle License Plate Extraction using Deep Learning Techniques,” National College of Ireland, pp. 1-19, 2021.
[5] Singh, Maurya and A. K, “Helmet Detection Using YOLOv3 and Deep Learning,” International Journal of Advanced Research in Computer Science, vol. 12, no. 1, pp. 88-91.
[6] Ahmad, S, Waheed, A, Abdullah and A. H, “Automatic Number Plate Recognition (ANPR) using YOLOv3 and OpenCV,” International Journal of Advanced Computer Science and Applications, vol. 12, no. 3, pp. 26-30, 2021.
[7] Hafizah, F, M. N and Sulaiman, “Automatic Detection of Motorbike Riders,” International Journal of Engineering and Advanced Technology, vol. 9, no. 5, pp. 1755-1761, 2020.
[8] Wu, C. M, Hsu, W. T, Chen and C. C, “Real-Time Motorcycle Helmet Detection and Recognition System using YOLOv3,” International Journal of Intelligent Systems and Applications, vol. 12, no. 3, pp. 15-22, 2020.
[9] Ali, S, Aamir and A, “License Plate Detection and Recognition using YOLOv3,” International Journal of Innovative Technology and Exploring Engineering, vol. 9, no. 4, pp. 285-289, 2020.
[10] Khandagle, A. G and Deshmukh, “An Efficient Vehicle Number Plate Recognition System using YOLOv3,” International Journal of Advanced Research in Computer Engineering Technology, vol. 8, no. 4, pp. 84-87, 2019.
[11] B. Amoolya, B. Vyagari Vaishnavi and T. M, “Helmet Detection and License Plate Recognition,” International Journal of Computer Science and Mobile Computing, vol. 10, no. 4, pp. 90-98, 2021.
[12] J. Reddy Pasam, A. Tatikonda, P. Sai Vemulapalli, N. Sai Sreeram, A. Velagala and Rubeena, “Number Plate Detection Without Helmet,” Journal of Engineering Sciences, vol. 13, no. 5, pp. 177-185, 2022.
[13] G. Marathe, P. Gurav, R. Narwade, V. Ghodke and S. M Patil, “Helmet Detection and Number Plate Recognition using Machine Learning,” IJIRT, vol.
Appendix C

SDG 3: Good Health and Well-being
By promoting helmet use and improving road safety, the project helps reduce injuries and fatalities caused by traffic accidents.

SDG 9: Industry, Innovation, and Infrastructure
The project incorporates advanced technologies such as AI and computer vision to innovate traffic management.

SDG 11: Sustainable Cities and Communities
Enhancing traffic rule enforcement and reducing road accidents contribute to safer and more sustainable cities.

Certificate of Compliance with United Nations Sustainable Development Goals

This is to certify that the project titled Helmet and Number Plate Detection, submitted by Syed Md Sheeraz, Udit Singh, and Pawan Kumar Singh, final year students of the Bachelor of Technology in Computer Science and Information Technology program at Ajay Kumar Garg Engineering College, Ghaziabad, has been reviewed and found to be in alignment with the following United Nations Sustainable Development Goals (SDGs). All efforts have been made, to the best of our ability and knowledge, to ensure that no other SDGs are compromised or negatively impacted.

SDG No.  SDG Name                                    Relevance
1        No Poverty
2        Zero Hunger
3        Good Health and Well-being
4        Quality Education
5        Gender Equality
6        Clean Water and Sanitation
7        Affordable and Clean Energy
8        Decent Work and Economic Growth
9        Industry, Innovation, and Infrastructure
10       Reduced Inequalities
11       Sustainable Cities and Communities
12       Responsible Consumption and Production
13       Climate Action
14       Life Below Water
15       Life on Land
16       Peace, Justice, and Strong Institutions
17       Partnerships for the Goals

Signature of the Students Signature of the Supervisor

Syed Md Sheeraz Mr. Madhup Agrawal

Udit Singh

Pawan Kumar Singh

Figure 7.2: SDG

Appendix D

Figure D.1 shows the plagiarism report of our project, which is 8 percent.

Figure 7.3: Plagiarism Report

Figure D.2 shows the AI report of our project, which is less than 20 percent.

Figure 7.4: AI Report

