A SMART TRAFFIC VOLUME MEASUREMENT BASED ON
DEEP LEARNING
Submitted in partial fulfillment for the award of the degree of
By
SURYAKUMAR PRASANNA DURGA-(20MIS0195)
November, 2024
DECLARATION
I further declare that the work reported in this thesis has not been submitted
and will not be submitted, either in part or in full, for the award of any other
degree or diploma in this institute or any other institute or university.
Place: Vellore
Date: Signature of the Candidate
S.PRASANNA DURGA (20MIS0195)
CERTIFICATE
The contents of this report have not been submitted and will not be
submitted either in part or in full, for the award of any other degree or
diploma in this institute or any other institute or university. The Project
report fulfils the requirements and regulations of VELLORE INSTITUTE
OF TECHNOLOGY, VELLORE and in my opinion meets the necessary
standards for submission.
ABSTRACT
The detection and monitoring of vehicles are vital for advancing intelligent transportation
management systems. This project aims to develop a robust vehicle detection and tracking
system using the AdaBoost algorithm applied to aerial images. By harnessing the latest
advancements in machine learning and deep learning, the system is designed to detect and track
multiple objects efficiently from captured video footage.
The process begins with analyzing traffic videos, where the system processes frames sequentially
to identify and monitor moving vehicles. The AdaBoost algorithm enhances the detection
capabilities by improving classification accuracy through a strong ensemble of weak classifiers.
Additionally, segmentation techniques are employed to distinguish targeted vehicles from
background elements, thereby increasing the effectiveness of detection.
This vehicle detection and tracking system serves multiple purposes, including assessing traffic
density, identifying different types of vehicles, and evaluating overall traffic flow conditions on
the road. Such functionality is crucial for urban planning, traffic management, and emergency
response strategies, enabling authorities to make informed decisions in real time. Ultimately, this
project aims to contribute significantly to the field of intelligent transportation systems,
enhancing road safety, reducing congestion, and improving the efficiency of traffic management
processes.
ACKNOWLEDGEMENT
It is my pleasure to express my deep sense of gratitude to Dr. Srinivas Koppu, Associate Professor Grade 2, School of Information Technology, Vellore Institute of Technology, for his constant guidance, continual encouragement, and understanding; more than all, he taught me patience in my endeavor. My association with him is not confined to academics only; it has been a great opportunity to work with an intellectual and an expert in the field of AI.
Place: Vellore
Date: Signature of Candidate
Prasanna Durga (20MIS0195)
TABLE OF CONTENTS
CHAPTER 4
ANALYSIS & DESIGN
4.1 PROPOSED METHODOLOGY .................................................... 26-28
4.2 SYSTEM ARCHITECTURE ........................................................... 28
4.3 MODULE DESCRIPTIONS ......................................................29-30
CHAPTER 5
IMPLEMENTATION & TESTING
5.1 DATA SET ............................................................................................ 35
5.2 SAMPLE CODE ............................................................................... 36-47
5.3 SAMPLE OUTPUT ............................................................................ 48-49
5.4 TEST PLAN & DATA VERIFICATION ........................................ 50-51
CHAPTER 6
RESULTS
6.1 RESEARCH FINDINGS ................................................................... 52
6.2 RESULT ANALYSIS & EVALUATION METRICS ........................ 53-55
CONCLUSIONS AND FUTURE WORK ................................................... 56
REFERENCES.......................................................................................... 57-58
APPENDICES ................................................................................................ 59-60
LIST OF FIGURES
3.4 Gantt chart ................................................................................... 24
4.1 Workflow………………………………………………………….27
4.2 Architecture …………………………………………………….28
4.1 Diagram for all the algorithms .................................................... 32
LIST OF TABLES
2.1 Literature Survey ....................................................................... 14-21
3.3 Budget table ................................................................................ 23
3.4 Gantt chart activities list ............................................................. 25
Appendices ....................................................................................... 59-60
LIST OF ACRONYMS
ML - Machine Learning
DL - Deep Learning
AdaBoost - Adaptive Boosting
UAV - Unmanned Aerial Vehicle
IoU - Intersection over Union
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
Traffic congestion is an increasingly critical issue in urban environments, adversely
affecting not only the efficiency of transportation systems but also contributing to
environmental pollution and decreased quality of life for residents. As urban populations
grow and vehicle ownership increases, traditional traffic management strategies are
becoming inadequate. Historical methods of traffic monitoring often involve manual
observation or limited sensor networks, which fail to capture real-time data and trends
accurately. The integration of advanced technologies, particularly computer vision and
artificial intelligence (AI), offers new pathways to address these challenges. These
technologies enable automated vehicle detection and tracking, providing detailed insights
into traffic flow and behavior. By analyzing video footage captured from various locations,
such as intersections and highways, we can gain a comprehensive understanding of traffic
patterns, vehicle counts, and congestion points. Moreover, the potential to process this data
in real-time allows for more responsive traffic management, enabling authorities to
implement dynamic solutions to improve road safety and efficiency. This shift towards
automated traffic monitoring systems not only enhances operational effectiveness but also
supports the development of smart city initiatives.
1.2 MOTIVATION
The motivation for this project stems from several key factors that highlight the pressing
need for improved traffic monitoring solutions. Current traffic monitoring systems often
face challenges in maintaining accuracy during adverse weather conditions and in complex
environments. There is a growing demand for real-time traffic analysis to support smart
city initiatives, as traditional fixed-camera systems frequently deliver incomplete data.
Additionally, existing systems struggle with issues related to vehicle occlusion and overlap,
and there is an increasing necessity for automated systems capable of classifying vehicle
types and analyzing traffic behavior patterns. By addressing these challenges, this project
aims to contribute to urban mobility improvements and reduce traffic congestion through
enhanced traffic volume analysis.
1.3 PROJECT STATEMENT
The primary goal of this project is to develop a smart traffic volume measurement system
utilizing deep learning techniques that can accurately detect, track, and count vehicles in
real-time from video surveillance footage. This system aims to provide reliable traffic
density estimations and vehicle classifications while maintaining high performance across
varying environmental conditions. By employing advanced algorithms, the system will
facilitate better data collection and analysis, ultimately supporting more effective traffic
management strategies.
1.4 OBJECTIVES
In alignment with the project's purpose, the main objectives include implementing
advanced vehicle detection algorithms using robust techniques such as AdaBoost,
improving traffic density estimation for real-time monitoring, and optimizing processing
efficiency to enable immediate analysis and decision-making. Additionally, the project will
focus on facilitating robust vehicle tracking across video frames, supporting multi-source
data integration for comprehensive traffic analysis, enhancing detection accuracy in diverse
conditions, and conducting continuous algorithm refinement to improve reliability and
performance.
CHAPTER 2
LITERATURE SURVEY
2. Mohammad Ali Amirabadi - "Deep Neural Network-Based QoT Estimation for SMF
and FMF Links" (2023): This study introduces a deep neural network (DNN)-based
regressor for estimating Guaranteed Signal-to-Noise Ratio (GSNR) in Single-Mode Fiber
(SMF) and Few-Mode Fiber (FMF) links. The proposed method leverages advanced data-
driven techniques to enhance accuracy while reducing computational complexity compared
to existing approaches. Extensive experiments demonstrate the DNN's superior
performance in predicting quality of transmission (QoT) metrics, making it a valuable tool
for network engineers. The findings suggest that the DNN can effectively support network
optimization efforts and improve overall link reliability in fiber-optic communications.
7. Bin Qu - "Optimizing Dynamic Cache Allocation in Vehicular Edge Networks: A
Method Combining Multisource Data Prediction and Deep Reinforcement Learning"
(2022): This research introduces an innovative approach to optimizing dynamic cache
allocation in vehicular edge networks by considering time-varying content popularity and
vehicle traffic patterns. The method combines multisource data prediction with deep
reinforcement learning to enhance content caching strategies. By improving the cache hit
rate and utility while minimizing replacement costs, the proposed solution contributes to
more efficient data management in vehicular environments. The findings demonstrate the
potential of integrating machine learning techniques into vehicular networks to improve
service quality and user satisfaction.
10. Faraz Malik Awan - "Using Noise Pollution Data for Traffic Prediction in Smart
Cities: Experiments Based on LSTM Recurrent Neural Networks" (2022): This paper
explores the use of noise pollution data combined with traffic time-series data to train Long
Short-Term Memory (LSTM) recurrent neural networks for traffic prediction in Madrid.
The proposed approach effectively improves prediction accuracy compared to traditional
methods by leveraging noise as an additional feature. The findings suggest that integrating
environmental data into traffic prediction models can enhance urban traffic management
strategies, ultimately leading to smarter and more efficient city infrastructure.
12. Grigorios Kakkavas - "Generative Deep Learning Techniques for Traffic Matrix
Estimation From Link Load Measurements" (2023): This paper explores the application
of deep generative models for estimating traffic matrices derived from link load
measurements in communication networks. Traffic matrices are critical for network
management as they represent the flow of data between different nodes, but estimating
them can be challenging due to high dimensionality and complexity. The proposed method
leverages generative models to transform the estimation task into a lower-dimensional
optimization problem, facilitating a more manageable and computationally efficient
approach.
13. Stefano Bilotta - "Short-Term Prediction of City Traffic Flow via Convolutional Deep
Learning" (2022): This research introduces the CONV-BI-LSTM architecture,
specifically designed for predicting short-term city traffic flow. By integrating
convolutional neural networks with bidirectional LSTM units, the proposed model
effectively captures spatial and temporal patterns in traffic data. The results demonstrate
that this architecture significantly outperforms existing solutions, providing accurate
predictions essential for urban planning and traffic management.
14. Ahsan Shabbir - "Smart City Traffic Management: Acoustic-Based Vehicle Detection
Using Stacking-Based Ensemble Deep Learning Approach" (2022): This paper
proposes a stacking ensemble deep learning technique for classifying emergency vehicle
sirens amidst background noises. The method achieves high accuracy in distinguishing
sirens from various sounds, enhancing emergency response times in smart city traffic
management. By utilizing multiple deep learning models, the proposed approach
effectively improves classification performance compared to traditional methods. The
findings highlight the potential of acoustic-based detection systems in improving public
safety and traffic management in urban environments.
15. Jinbiao Huo - "Quantify the Road Link Performance and Capacity Using Deep
Learning Models" (2022): This study introduces a deep learning framework for
quantifying road link performance and capacity in dynamic traffic scenarios. By combining
the Bureau of Public Roads (BPR) link performance function with neural network modules,
the proposed approach enables accurate estimation of traffic conditions. The findings
demonstrate that the framework effectively captures complex traffic dynamics, offering
valuable insights for traffic management and infrastructure planning.
S.NO | TITLE | SUMMARY
11 | "Explainable Deep-Learning Approaches for Packet-Level Traffic Prediction of Collaboration and Communication Mobile Apps" (2023) | Proposes an explainable deep learning approach for predicting traffic from collaboration and communication mobile apps, enhancing trust and transparency in resource allocation optimization in communication networks through explainable AI techniques.
12 | "Generative Deep Learning Techniques for Traffic Matrix Estimation From Link Load Measurements" (2023) | Explores deep generative models for estimating traffic matrices from link load measurements, transforming the estimation task into a lower-dimensional optimization problem to improve accuracy and computational efficiency in network management.
13 | "Short-Term Prediction of City Traffic Flow via Convolutional Deep Learning" (2022) | Introduces the CONV-BI-LSTM architecture for predicting short-term city traffic flow by capturing spatial and temporal patterns, significantly outperforming existing solutions and emphasizing the importance of advanced deep learning techniques in urban mobility.
14 | "Smart City Traffic Management: Acoustic-Based Vehicle Detection Using Stacking-Based Ensemble Deep Learning Approach" (2022) | Proposes a stacking ensemble deep learning method for classifying emergency vehicle sirens in noisy environments, improving emergency response times in traffic management through high classification accuracy.
15 | "Quantify the Road Link Performance and Capacity Using Deep Learning Models" (2022) | Introduces a deep learning framework to quantify road link performance and capacity in dynamic traffic scenarios, effectively capturing complex traffic dynamics and providing insights for traffic management and infrastructure planning.
2.2 CHALLENGES PRESENT IN THE EXISTING SYSTEM
Current traffic monitoring systems encounter several significant challenges that hinder their
effectiveness. One major issue is the impact of adverse weather conditions, such as heavy
rain or fog, which can obscure camera views and compromise vehicle detection accuracy.
Additionally, complex urban environments introduce varying lighting conditions, leading
to false positives and missed detections.
Real-time processing of high-volume video data is another critical hurdle; many systems
struggle to analyze video feeds quickly enough for timely traffic management decisions,
often due to computational constraints. Vehicle occlusion and overlap in dense traffic
situations further complicate accurate counting and tracking, particularly in multi-lane
scenarios.
Moreover, the capability to classify vehicles accurately is often lacking in current systems. Distinguishing between different types of vehicles, such as cars, trucks, buses, and motorcycles, is essential for comprehensive traffic analysis, yet many technologies fall short.
Finally, integrating data from various sources, such as cameras and sensors, remains challenging, as many systems are not equipped to handle diverse data streams effectively. Addressing these challenges is crucial for enhancing traffic monitoring and supporting smart city initiatives.
CHAPTER 3
REQUIREMENTS
3.1 HARDWARE REQUIREMENTS
➢ Processor: 1 Gigahertz (GHz) or faster
➢ RAM: Minimum 2 GB
➢ Hard Disk: Minimum 20 GB
➢ Graphics Card: DirectX 9 or later with WDDM 1.0 driver
➢ Display Resolution: 1920 x 1080
3.3 BUDGET
Procured Items/Components for the Project Work | Total Cost
Item 1 | NA
Item 2 | NA
Item 3 | NA
Total Budget (INR) | NA
3.4 GANTT CHART
The Gantt chart outlines the major tasks involved in the Traffic volume project and their
estimated months of work. The durations are calculated based on the estimated time
required to complete each task. Adjustments can be made as needed based on project
progress and any unforeseen delays or changes in requirements.
Activity | Description of Activity | Guide Remarks
1. Define Project Scope and Objectives | Establish specific goals for the traffic monitoring system. | ok
2. Data Collection and Preprocessing | Gather and preprocess traffic video data for analysis. | ok
3. Feature Extraction and Engineering | Extract essential features from video frames for vehicle detection. | ok
4. Model Development | Develop machine learning models for vehicle detection and tracking. | ok
5. Training and Evaluation | Train models and evaluate their performance using relevant metrics. | ok
6. Comparison with Existing Models | Compare developed models against existing systems for improvement. | ok
7. Deployment and Documentation | Deploy the system and document the implementation and user guidelines. | ok
8. Final Evaluation and Reporting | Evaluate system performance and compile findings into a report. | ok
9. Project Conclusion and Presentation | Summarize project outcomes and lessons learned in a final presentation. | ok
CHAPTER 4
ANALYSIS & DESIGN
4.1 PROPOSED METHODOLOGY
The proposed methodology for the vehicle detection and tracking system is designed to
enhance real-time traffic monitoring through a series of systematic steps. The process
begins with capturing high-resolution video footage from traffic surveillance cameras or
drones, ensuring the quality of the input data is optimal for accurate analysis. Initially,
preprocessing techniques are employed, such as background subtraction, which isolates
moving vehicles from the static background. This is followed by noise reduction methods
to enhance the clarity of the images, improving detection accuracy.
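To illustrate this preprocessing stage, the short sketch below (Python with OpenCV) applies the MOG2 background subtractor, the same subtractor created in the sample code of Section 5.2, followed by a median blur and a shadow-removing threshold. The kernel size and threshold value are illustrative assumptions rather than the project's tuned settings.

import cv2

def preprocess_frame(frame, backsub, blur_ksize=7):
    """Isolate moving pixels from the static background and reduce noise."""
    fg_mask = backsub.apply(frame)                 # foreground (moving) pixels
    fg_mask = cv2.medianBlur(fg_mask, blur_ksize)  # suppress salt-and-pepper noise
    # MOG2 marks shadows as gray (127); keep only confident foreground pixels.
    _, fg_mask = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    return fg_mask

# One subtractor is shared across all frames of a video.
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)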
For vehicle detection, connected component analysis is utilized, which identifies and
segments vehicles based on pixel connectivity. This method effectively differentiates
vehicles from other objects in the scene. To enhance classification accuracy, machine
learning algorithms, particularly the AdaBoost algorithm, are implemented. This algorithm
is trained on a labeled dataset to classify different types of vehicles, such as cars, trucks,
and motorcycles, enabling the system to provide detailed insights into traffic composition.
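A minimal sketch of the connected component step is shown below: it converts the binary foreground mask into candidate vehicle bounding boxes. The minimum-area filter mirrors the MIN_AREA constant of the sample code but is otherwise an assumed value, and the resulting regions would still need to be passed to the trained classifier.

import cv2

def candidate_regions(fg_mask, min_area=250):
    """Return bounding boxes and centroids of foreground blobs large enough to be vehicles."""
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
    boxes = []
    for label in range(1, n_labels):        # label 0 is the background
        x, y, w, h, area = stats[label]
        if area >= min_area:                # discard small noise blobs
            boxes.append((x, y, w, h, tuple(centroids[label])))
    return boxes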
Once vehicles are detected, tracking is facilitated through techniques like Kalman filtering
or SORT (Simple Online and Real-time Tracking). These methods ensure continuous
tracking of vehicles across multiple frames, allowing the system to maintain accurate
positional information. Speed estimation is integrated into the system by calculating the
distance a vehicle travels between frames, generating real-time alerts for speeding incidents
based on predetermined thresholds.
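Because the calculate_speed helper invoked in the sample code is not reproduced in this report's excerpt, the fragment below is only a minimal sketch of the idea: the pixel displacement of a vehicle's centroid between consecutive frames is scaled by the frame rate and by a pixels-to-metres calibration factor. Both the calibration factor and the alert threshold shown here are assumed placeholders that would have to be calibrated for a real camera.

import math

METRES_PER_PIXEL = 0.05      # assumed calibration factor; depends on camera geometry
SPEED_ALERT_KMH = 80         # assumed alert threshold in km/h

def estimate_speed_kmh(prev_centroid, curr_centroid, frame_rate):
    """Speed from centroid displacement between two consecutive frames."""
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    pixels_per_frame = math.hypot(dx, dy)
    metres_per_second = pixels_per_frame * frame_rate * METRES_PER_PIXEL
    return metres_per_second * 3.6

def is_speeding(speed_kmh):
    return speed_kmh > SPEED_ALERT_KMH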
Additionally, the system displays bounding boxes around detected vehicles and logs
movement patterns to analyze traffic density and flow. This data is invaluable for urban
planning and improving road safety. By providing real-time alerts for speeding vehicles
and unusual traffic conditions, the system facilitates timely interventions, contributing to
more efficient traffic management.
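As an illustration of this display and logging step, the sketch below draws a labelled bounding box for each detection and appends one row per vehicle to a CSV file. The column layout and file name are assumptions for illustration and not the exact format produced by the project's logger.

import csv
import cv2

def annotate_and_log(frame, detections, frame_id, log_path="traffic_log.csv"):
    """Draw bounding boxes and append one log row per detected vehicle."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for (x, y, w, h, vehicle_type, speed_kmh) in detections:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"{vehicle_type} {speed_kmh:.0f} km/h", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            writer.writerow([frame_id, vehicle_type, x, y, w, h, round(speed_kmh, 1)])
    return frame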
REAL-TIME TRAFFIC MONITORING AND ANALYSIS WORKFLOW
4.2 VEHICLE DETECTION AND TRACKING SYSTEM ARCHITECTURE
4.3 MODULES
▪ Image Acquisition: This module captures real-time images or video frames from traffic
cameras or other sources. High-resolution input frames are collected to ensure reliable
vehicle detection and tracking in later stages. The captured images serve as the primary
data input for all subsequent processing and analysis steps.
▪ Image Preprocessing: Preprocessing improves image quality by reducing noise, adjusting
brightness, and resizing. These steps ensure uniformity and enhance the features of vehicles
in the images, making them easier to detect. By optimizing image clarity, preprocessing
prepares the frames for accurate and consistent vehicle recognition.
▪ Vehicle Detection: The AdaBoost algorithm identifies vehicles within each frame by
detecting unique vehicle features. It creates bounding boxes around detected vehicles,
marking each for further analysis. This module is essential for initiating the tracking and
classification steps, using robust detection to maximize accuracy in varying conditions.
▪ Verification: Using the Intersection over Union (IoU) metric, this module confirms the
accuracy of detected vehicles. IoU compares the predicted bounding boxes with ground
truth to validate each detection, filtering out false positives. This ensures only reliable
vehicle detections are passed to the tracking module.
▪ Vehicle Tracking: The SORT (Simple Online and Realtime Tracking) algorithm tracks vehicles across frames by assigning unique IDs and monitoring their trajectories. This module maintains consistent tracking, even when vehicles move through the frame, enabling accurate speed and density estimations for each detected vehicle; a minimal association sketch is given after this list.
▪ Traffic Density Estimation: This module calculates the number of vehicles in a given area,
providing insights into traffic congestion. Using tracked vehicles, it estimates real-time
density values, which can help city planners and traffic managers understand traffic flow
patterns, identify bottlenecks, and implement effective road management strategies.
▪ Speed Estimation: By analyzing the displacement of tracked vehicles over time, this
module calculates each vehicle's speed. Speed estimation helps monitor traffic compliance
and identify speed violations, providing data for traffic enforcement and improving safety
on busy roads through real-time speed alerts and notifications.
▪ Vehicle Classification: This module categorizes detected vehicles into types (e.g., car, bus,
truck) based on size, shape, and other characteristics. Classification enables traffic
management to analyze vehicle distribution and assess infrastructure needs, helping
planners accommodate various vehicle types in road design and control.
▪ Alert System: Designed to improve road safety, this module generates alerts for detected
events like speeding or unauthorized stopping. By promptly notifying relevant authorities
or users, it enables rapid responses to potential traffic issues, enhancing compliance and
overall safety in real-time.
▪ Data Visualization: Data visualization presents traffic data (like density, speed, and
vehicle types) in user-friendly formats, such as graphs and tables. Visualizations aid
stakeholders in interpreting traffic patterns easily and making data-driven decisions for
traffic control and infrastructure planning.
▪ Report Generation: This module compiles traffic data and analysis into structured reports
(PDF or CSV) for records or further study. Reports provide comprehensive insights into
traffic conditions, facilitating informed decisions in traffic management, urban planning,
and research. They also allow tracking of historical data trends.
▪ Data Storage: The storage module archives processed data and generated reports, ensuring
long-term accessibility for analysis, audits, or historical comparison. By organizing and
securely storing data, it supports effective data management and future retrieval, benefiting
city planning and ongoing traffic studies.
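As referenced in the Vehicle Tracking module above, the fragment below sketches only the association idea behind SORT-style tracking: detections in the current frame are matched to existing tracks by nearest centroid, and a new ID is issued when no track is close enough. It deliberately omits the Kalman filtering and IoU-based matching of the full SORT algorithm; the 20-pixel association distance mirrors the MAX_DISTANCE constant in the sample code but is otherwise an assumption.

import math
from itertools import count

_next_id = count(1)

def update_tracks(tracks, detections, max_distance=20):
    """Greedy nearest-centroid association.

    tracks:     dict mapping track_id -> (x, y) of the last known centroid
    detections: list of (x, y) centroids in the current frame
    Returns an updated dict of track_id -> centroid.
    """
    updated = {}
    unmatched = set(tracks)
    for cx, cy in detections:
        best_id, best_dist = None, max_distance
        for tid in unmatched:
            tx, ty = tracks[tid]
            d = math.hypot(cx - tx, cy - ty)
            if d < best_dist:
                best_id, best_dist = tid, d
        if best_id is None:
            best_id = next(_next_id)      # no nearby track: start a new one
        else:
            unmatched.discard(best_id)    # claim the matched track
        updated[best_id] = (cx, cy)
    return updated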
ALGORITHMS:
1. AdaBoost Algorithm:
AdaBoost is an ensemble learning technique that combines the predictions of several base
estimators, typically decision trees, to improve overall model accuracy. In this context, it
focuses on misclassified data points in each iteration, adjusting weights to enhance the
performance of weak classifiers. This method is utilized for vehicle detection, making the
system more robust and accurate by emphasizing difficult-to-classify vehicle features.
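As a hedged illustration of how such a classifier can be trained, the sketch below uses scikit-learn's AdaBoostClassifier on pre-computed feature vectors. The random placeholder features, the binary vehicle/background labels, and the choice of 50 boosting rounds are assumptions for demonstration only; the report does not fix the feature extraction or the training set at this point.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X holds one feature vector per candidate region; y holds labels (0 = background, 1 = vehicle).
# The random data below is a stand-in for real features extracted from video frames.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The default base estimator is a depth-1 decision tree ("stump"); each boosting round
# reweights the samples the previous classifiers got wrong, as described above.
clf = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_test, clf.predict(X_test)))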
3. Segmentation Techniques:
Segmentation involves dividing an image into multiple segments (regions) for easier
analysis.
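The report does not fix a particular segmentation technique, so the fragment below is only one possible sketch: Otsu thresholding of the foreground mask followed by contour extraction, which yields one bounding rectangle per candidate vehicle region.

import cv2

def segment_regions(fg_mask, min_area=250):
    """Split a binary foreground mask into per-vehicle regions via contours."""
    _, binary = cv2.threshold(fg_mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:        # ignore small noise regions
            regions.append(cv2.boundingRect(c))   # (x, y, w, h) of each segment
    return regions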
DIAGRAM FOR ALL THE ALGORITHMS
This system is designed to monitor and analyze traffic flow. It accomplishes this by capturing
images of traffic scenes, processing them to identify and track individual vehicles, and then
generates various outputs related to traffic density, vehicle classification, speed alerts, and
movement patterns.
1. Image Acquisition:
o The system starts by capturing images of the traffic scene using a camera.
2. Image Preprocessing:
o The captured images are preprocessed to improve their quality and prepare them for
further analysis. This may involve steps like noise reduction, contrast enhancement,
and image resizing.
3. Vehicle Detection (AdaBoost):
o The preprocessed images are fed into a vehicle detection module, which utilizes the
AdaBoost algorithm to identify and locate individual vehicles within the scene.
AdaBoost is a powerful machine learning algorithm that combines multiple weak
classifiers to create a strong classifier for object detection.
4. Vehicle Tracking (SORT):
o Once vehicles are detected, the SORT (Simple Online and Realtime Tracking)
algorithm is employed to track their movement across consecutive frames. SORT
is a robust tracking algorithm that can handle occlusions and variations in
appearance.
5. Traffic Density Estimation:
o Based on the number of detected and tracked vehicles within a specific area, the
system can estimate the traffic density in that region.
6. Output Results:
o The system generates various output results, including:
▪ Traffic Density Information: Provides information about the current traffic
density in the monitored area.
▪ Vehicle Classification: Identifies the types of vehicles present (e.g., cars,
trucks, motorcycles).
▪ Speed Alerts: Detects and flags vehicles exceeding a predefined speed limit.
▪ Movement Patterns Logging: Records the movement patterns of vehicles
within the scene, which can be useful for traffic analysis and planning.
7. Verification of Detection Accuracy (IoU):
o To ensure the accuracy of vehicle detection, the system may use the Intersection
over Union (IoU) metric. IoU measures the overlap between the ground truth
bounding boxes of vehicles and the predicted bounding boxes generated by the
detection algorithm. A higher IoU indicates better detection accuracy.
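The IoU check in step 7 can be computed directly from box coordinates. The helper below is a standard sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates; a detection is typically accepted when its IoU with the matched ground-truth box is at least 0.5.

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0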
CHAPTER 5
IMPLEMENTATION & TESTING
5.1 DATASET
The dataset comprises 20 meticulously recorded highway traffic videos that offer a
comprehensive representation of various traffic scenarios, ensuring a rich foundation for
vehicle detection and counting tasks. Each video captures traffic across multiple lanes,
providing a diverse array of vehicle types, including cars, trucks, buses, and motorcycles.
This diversity not only enhances the dataset's robustness but also allows for the evaluation
of detection algorithms under varying conditions.
The videos are captured at different times of day, ranging from peak traffic hours to quieter
periods, which introduces variations in vehicle density and flow. This temporal diversity
helps simulate real-world conditions, enabling the development of models that can adapt to
fluctuating traffic patterns. Furthermore, the dataset includes recordings taken under
various weather conditions, such as clear skies, rain, and fog, which are critical for
assessing the performance of vehicle detection systems in challenging environments.
High resolution and clear visibility are paramount in this dataset, ensuring that vehicles are
easily identifiable for accurate detection. The dataset is specifically designed for
implementing the YOLOv3 (You Only Look Once version 3) algorithm, which is renowned
for its speed and accuracy in real-time object detection. By providing bounding boxes
around detected vehicles and tracking their trajectories, the dataset allows researchers to
analyze vehicle movement direction and calculate traffic volume effectively.
In summary, this dataset is an ideal resource for developing and testing advanced vehicle
detection and counting systems, enabling improvements in real-time traffic monitoring and
management strategies. Its comprehensive coverage of traffic conditions and vehicle types
supports the creation of robust algorithms capable of performing accurately in real-world
applications.
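As a small illustration of how such a dataset can be inspected before processing, the loop below opens each video with OpenCV and prints its resolution, frame rate and length. The dataset/video_*.mp4 naming pattern is an assumption and not the dataset's actual file layout.

import glob
import cv2

for path in sorted(glob.glob("dataset/video_*.mp4")):   # assumed naming pattern
    cap = cv2.VideoCapture(path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    print(f"{path}: {width}x{height} @ {fps:.1f} fps, {frames} frames")
    cap.release()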
DATASET PICTURE
5.2 SAMPLE CODE
Main.py
import os
import detect
import AdaBoost
import cartesian
import numpy as np
import cv2
import sys
import matplotlib.pyplot as plt
VIDEO_SOURCE = sys.argv[1]
MIN_AREA = 250
MAX_DISTANCE = 20
MEDIA_BLUR = 7
BLUR = 7
SENSIBILITY = 10
N_FRAMES_OUT = 15
FRAMES_LEARN = 100
SPEED_THRESHOLD = 15 # Speed threshold for triggering alert
SPEED_NORMAL_THRESHOLD = 10 # Normal speed threshold
}
def get_speed_status(speed):
    """Return speed status message based on the speed value."""
    if speed > SPEED_THRESHOLD:
        return "Fast"
    elif speed < SPEED_NORMAL_THRESHOLD:
        return "Slow"
    else:
        return "Normal"
# Speed status distribution
plt.subplot(2, 1, 2)
statuses = list(speed_status_counts.keys())
counts = list(speed_status_counts.values())
plt.bar(statuses, counts, color=['red', 'yellow', 'green'])
plt.title('Vehicle Speed Status Distribution')
plt.xlabel('Speed Status')
plt.ylabel('Count')
plt.xticks(rotation=45)
plt.grid()
plt.tight_layout()
plt.show(block=True) # Ensure that it blocks until the window is closed
def main():
buffer_vehicles = []
vehicle_counter_left = 0
vehicle_counter_right = 0
vehicle_counter_cars = 0
vehicle_counter_trucks = 0
vehicle_data = [] # To store vehicle counts over time
speed_status_counts = {"Fast": 0, "Normal": 0, "Slow": 0} # To count speed statuses
backsub = cv2.createBackgroundSubtractorMOG2()
backsub = learnSub(backsub, VIDEO_SOURCE, FRAMES_LEARN)
capture = cv2.VideoCapture(VIDEO_SOURCE)
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_rate = capture.get(cv2.CAP_PROP_FPS)
road_lines = ROAD_LINE_MAP.get(VIDEO_SOURCE, DEFAULT_ROAD_LINES)
ROAD_LINE_LEFT, ROAD_LINE_RIGHT = road_lines
cv2.namedWindow('Background')
cv2.moveWindow('Background', 400, 0)
cv2.namedWindow('Track')
previous_centroids = {}
speed_alert = "Normal" # Initialize speed alert status
while True:
frame_id = int(capture.get(cv2.CAP_PROP_POS_FRAMES))
ret, frame = capture.read()
if not ret:
break
vehicle_counter_left += count_left
vehicle_counter_right += count_right
logger(buffer_vehicles, frame_id, vehicle_counter_left, vehicle_counter_right,
vehicle_counter_cars, vehicle_counter_trucks, speed_alert)
if i in previous_centroids:
speed = calculate_speed(previous_centroids[i], centroid, frame_rate)
# Update speed alert if a high-speed vehicle is detected
if speed > SPEED_THRESHOLD and speed_alert != "Fast":
speed_alert = f'{vehicle_type} is going Fast!'
elif speed_alert != "Normal":
speed_alert = "Normal" # Reset to Normal if speed is not high
previous_centroids[i] = centroid
AdaBoost.drawPanel(
frame,
ROAD_LINE_LEFT,
ROAD_LINE_RIGHT,
vehicle_counter_left,
vehicle_counter_right,
width
)
cv2.imshow('Track', frame)
cv2.imshow('Background', bkframe)
if cv2.waitKey(100) == ord('q'):
break
# Visualization of statistics
visualize_statistics(vehicle_data, speed_status_counts)
capture.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
Detect.py
import main
import cv2
import AdaBoost
from cartesian import distance
'area': area,
'route': 'R'
})
if buffer:
n_left, n_right, buffer = countVehicles(buffer, frame_id, main.MAX_DISTANCE,
main.N_FRAMES_OUT)
else:
j += 1
i += 1
if final:
for vehicle in buffer:
if vehicle['route'] == 'L':
count_left += 1
else:
count_right += 1
buffer = []
AdaBoost.py
import cv2
cv2.line(frame, road_line_left[0], road_line_left[1], (100, 255, 0), 3) # Left road line in green
cv2.line(frame, road_line_right[0], road_line_right[1], (0, 165, 255), 3) # Right road line in
orange
cv2.putText(frame, f'Right Lane Count: {count_right}', (10, 60),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 165, 255), 2) # Right lane count in orange
cv2.putText(frame, f'Total Count: {count_left + count_right}', (10, 90),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
Cartesian.py
import math

def distance(p1, p2):
    return math.sqrt((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2)

def module(vector):
    return math.sqrt(vector[0]**2 + vector[1]**2)

def angleVectors(u, v):
    scalar = u[0] * v[0] + u[1] * v[1]
    mod = module(u) * module(v)
    angle = math.acos(scalar / mod)  # arccos of the normalized dot product gives the angle
    return math.degrees(angle)
FORMULAS USED HERE
Distance Function
Purpose: Calculates the Euclidean distance between two points, p1 and p2, in a 2D Cartesian coordinate system.
Formula Used:
• The distance d between two points (x1, y1) and (x2, y2) is d = √((x2 − x1)² + (y2 − y1)²).
• This formula is derived from the Pythagorean theorem.
Vector Function
Purpose: Computes the vector from point p2 to point p1.
Formula Used:
• The vector v from point p2 = (x2, y2) to point p1 = (x1, y1) is given by v = (x1 − x2, y1 − y2).
• This represents a directional change from one point to another.
Module Function
Purpose: Calculates the magnitude (or length) of a vector.
Formula Used:
• The magnitude ||v|| of a vector v = (x, y) is given by ||v|| = √(x² + y²).
• This formula again stems from the Pythagorean theorem, as the vector's components form a right triangle with the axes.
5.3 SAMPLE OUTPUT
This output is for video4
5.4 TEST PLAN AND DATA VERIFICATION
Goal
The main goal of the testing process is to evaluate the accuracy and dependability of the
Traffic Monitoring System for vehicle detection, counting, classification, and alert
generation.
Testing Methodology
1. Data Verification
The data verification process ensures the validity and integrity of the dataset, including
video footage and annotations. The following elements will be verified:
Data Completeness: Ensure that the video files are intact and properly formatted.
Data Accuracy: Verify that the vehicle detection and classification outputs are correct.
2. Output Verification
The output verification process ensures that the system generates the correct output based on the
detected vehicles and their speeds.
Acceptance Criteria
The system is considered acceptable if:
• The video footage is loaded and processed without errors.
• The vehicle count and classification in the output match the actual data.
• Alerts for speeding vehicles are generated accurately.
CHAPTER 6
RESULTS
6.1 RESEARCH FINDINGS
Expected Outcomes:
The expected outcomes for the Traffic Monitoring System project focus on enhancing
traffic management and improving road safety. One of the primary goals is to achieve
improved vehicle detection accuracy through the implementation of the AdaBoost
algorithm and effective image preprocessing techniques, allowing for reliable vehicle
identification in various traffic conditions, including differing weather and lighting
scenarios. The system is designed to provide real-time vehicle counting, which can be
validated against manual counting methods to ensure effectiveness.
Additionally, the project aims to classify detected vehicles into categories such as cars,
trucks, buses, and motorcycles, facilitating better analysis and management of traffic
flows. It will also incorporate speed estimation capabilities by analyzing vehicle
displacement over time, enabling effective monitoring of speed limits and identification
of speed violations. This feature contributes to the overall goal of enhancing road safety,
as the system will generate alerts for vehicles exceeding speed limits, allowing for
prompt responses from traffic enforcement agencies.
The project will yield insights into traffic density levels, assisting urban planners and
traffic managers in understanding congestion patterns and making informed decisions
regarding infrastructure improvements. Data visualization will play a crucial role, as the
system will produce user-friendly visualizations of traffic data, such as counts, speeds,
and classifications, aiding stakeholders in interpreting traffic patterns and making data-
driven decisions.
Furthermore, the ability to generate structured reports on traffic conditions and patterns
will provide valuable information for urban planning and traffic management efforts,
supporting ongoing studies and policy-making. Finally, the architecture of the system
will be designed for scalability and adaptability, allowing for the addition of more
cameras and monitoring locations without significant redesign, ultimately contributing to
the development of smarter cities.
6.2 RESULT ANALYSIS & EVALUATION METRICS
This is output for video4
Analyzing the Traffic Monitoring System Results
Understanding the Data
From the provided charts, we can glean the following information:
Vehicle Count Over Time:
• The plot shows a step-wise increase in the total vehicle count over time.
• This indicates that the system is successfully detecting and tracking vehicles as they enter
the scene.
• There are periods of stability where no new vehicles enter, followed by sudden jumps,
suggesting batches of vehicles entering the scene.
Vehicle Speed Status Distribution:
• The bar chart shows the distribution of vehicles based on their speed status:
o Fast: The majority of vehicles are classified as "fast."
o Normal: A smaller proportion of vehicles are classified as "normal."
o Slow: The least number of vehicles are classified as "slow."
Evaluation Metrics
To evaluate the performance of the traffic monitoring system, we can use the following metrics:
Tracking Accuracy:
• Multiple Object Tracking Accuracy (MOTA): Measures the overall accuracy of tracking
multiple objects over time.
• Mostly Tracked (MT): Percentage of ground truth tracks that are mostly tracked by the
system.
• Mostly Lost (ML): Percentage of ground truth tracks that are mostly lost by the system.
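For reference, MOTA is commonly computed from the totals of missed detections (false negatives), false positives and identity switches, normalised by the number of ground-truth object instances over all frames. The helper below is a minimal sketch of that formula rather than a full evaluation toolkit.

def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy: MOTA = 1 - (FN + FP + IDSW) / GT."""
    if num_ground_truth <= 0:
        raise ValueError("ground-truth count must be positive")
    return 1.0 - (false_negatives + false_positives + id_switches) / num_ground_truth

# Example: 40 misses, 25 false alarms and 5 identity switches over 1000 ground-truth boxes.
print(mota(40, 25, 5, 1000))   # 0.93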
Data Collection and Ground Truth
To calculate these metrics, we need ground truth data, which involves manually annotating vehicles
in the video frames with their:
• Bounding boxes: Defining the spatial extent of each vehicle.
• Class labels: Identifying the type of vehicle (car, truck, etc.).
• Speed: The actual speed of each vehicle.
CONCLUSIONS AND FUTURE WORK
The traffic monitoring project successfully demonstrated the ability to detect, classify, and track
vehicles in real-time using computer vision and machine learning. By analyzing video data, the
system effectively counts vehicles, estimates traffic density, and generates alerts for speeding
incidents, enhancing road safety and informing urban planning. Future work could focus on
integrating advanced algorithms, like CNNs, for improved accuracy, and expanding to real-time
traffic prediction models. Additionally, incorporating IoT technologies and enhancing user
interfaces will provide comprehensive data analytics for smarter traffic management solutions,
further optimizing traffic flow and safety on the roads.
REFERENCES
[1] A. Gutierrez-Torre et al., "Automatic Distributed Deep Learning Using Resource-Constrained
Edge Devices," IEEE Internet of Things Journal, vol. 9, no. 16, pp. 15018-15029, Aug. 2022, doi:
10.1109/JIOT.2021.3098973.
[2] M. A. Amirabadi et al., "Deep Neural Network-Based QoT Estimation for SMF and FMF
Links," Journal of Lightwave Technology, vol. 41, no. 6, pp. 1684-1695, March 2023, doi:
10.1109/JLT.2022.3225827.
[3] D. Bega et al., "DeepCog: Optimizing Resource Provisioning in Network Slicing With AI-
Based Capacity Forecasting," IEEE Journal on Selected Areas in Communications, vol. 38, no. 2,
pp. 361-376, Feb. 2020, doi: 10.1109/JSAC.2019.2959245.
[4] A. A. Ahmed et al., "An Optimized Deep Neural Network Approach for Vehicular Traffic
Noise Trend Modeling," IEEE Access, vol. 9, pp. 107375-107386, 2021, doi:
10.1109/ACCESS.2021.3100855.
[5] A. G. M. Mengara et al., "IoTSecUT: Uncertainty-Based Hybrid Deep Learning Approach for
Superior IoT Security Amidst Evolving Cyber Threats," IEEE Internet of Things Journal, vol. 11,
no. 16, pp. 27715-27731, Aug. 2024, doi: 10.1109/JIOT.2024.3404808.
[8] Y. Liu et al., "Privacy-Preserving Traffic Flow Prediction: A Federated Learning Approach,"
IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7751-7763, Aug. 2020, doi:
10.1109/JIOT.2020.2991401.
[9] G. Muhammad et al., "Stacked Autoencoder-Based Intrusion Detection System to Combat
Financial Fraudulent," IEEE Internet of Things Journal, vol. 10, no. 3, pp. 2071-2078, Feb. 2023,
doi: 10.1109/JIOT.2020.3041184.
[10] F. M. Awan et al., "Using Noise Pollution Data for Traffic Prediction in Smart Cities:
Experiments Based on LSTM Recurrent Neural Networks," IEEE Sensors Journal, vol. 21, no. 18,
pp. 20722-20729, Sept. 2021, doi: 10.1109/JSEN.2021.3100324.
[11] I. Guarino et al., "Explainable Deep-Learning Approaches for Packet-Level Traffic Prediction
of Collaboration and Communication Mobile Apps," IEEE Open Journal of the Communications
Society, vol. 5, pp. 1299-1324, 2024, doi: 10.1109/OJCOMS.2024.3366849.
[12] G. Kakkavas et al., "Generative Deep Learning Techniques for Traffic Matrix Estimation
From Link Load Measurements," IEEE Open Journal of the Communications Society, vol. 5, pp.
1029-1046, 2024, doi: 10.1109/OJCOMS.2024.3358740.
[13] S. Bilotta et al., "Short-Term Prediction of City Traffic Flow via Convolutional Deep
Learning," IEEE Access, vol. 10, pp. 113086-113099, 2022, doi: 10.1109/ACCESS.2022.3217240.
[14] A. Shabbir et al., "Smart City Traffic Management: Acoustic-Based Vehicle Detection Using
Stacking-Based Ensemble Deep Learning Approach," IEEE Access, vol. 12, pp. 35947-35956,
2024, doi: 10.1109/ACCESS.2024.3370867.
[15] J. Huo et al., "Quantify the Road Link Performance and Capacity Using Deep Learning
Models," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 10, pp. 18581-
18591, Oct. 2022, doi: 10.1109/TITS.2022
Appendices
Budget template (columns: Item | Total Cost | In Kind or Match (what you already have) | Requested). Items listed below are for example only.
Facility Expenses
Utilities | NA | NA | NA
Maintenance (cleaning) | NA | NA | NA
Internet connection | 200/- | NA | NA
Supplies
Office supplies (provide details such as $20 per month x 12 months) | NA | NA | NA
Workbooks | NA | NA | NA
Arts & crafts supplies | NA | NA | NA
Software | NA | NA | NA
Classroom supplies (for students and teachers) | NA | NA | NA
Total Supplies
Equipment
Desktop computers | NA | NA | NA
Laptop computers | NA | NA | NA
Printer | NA | NA | NA
Scanner | NA | NA | NA
Chairs | NA | NA | NA
Digital camera | NA | NA | NA
Total Equipment
Contractual
Total Contractual
Communications
Telephone | NA | NA | NA
Long distance | NA | NA | NA
Cellular phones | NA | NA | NA
Postage | NA | NA | NA
Internet | NA | NA | NA
Total Communications
Other Expenses | NA | NA | NA