paper[1]
[Figure 1 image: 1. Resize image. 2. Run convolutional network. 3. Non-max suppression.]

Figure 1: The YOLO Detection System. Processing images with YOLO is simple and straightforward. Our system (1) resizes the input image to 640×640, (2) runs a single convolutional network on the image, and (3) thresholds the resulting detections by the model's confidence.

With the development of artificial intelligence, deep learning-based object detection models have proven highly promising for transforming traffic management. The YOLO (You Only Look Once) model has emerged as a highly efficient deep learning architecture for real-time object detection due to its ability to process images quickly and accurately. Unlike traditional image-processing techniques, YOLO can detect multiple vehicles simultaneously and provide precise bounding boxes, making it an ideal choice for real-time traffic analysis. This research aims to integrate the YOLO model with a low-cost hardware system based on the Arduino microcontroller and an OV7670 camera module. By leveraging real-time image acquisition and AI-driven traffic density analysis, the system dynamically adjusts signal timings to optimize traffic flow and reduce congestion.

The designed system includes an OV7670 camera for live traffic image capture, a YOLO-based deep learning algorithm for vehicle detection, and an Arduino-based traffic light controller. The images are analyzed in real time by a trained YOLO model that detects and counts vehicles at an intersection. According to the detected traffic density, an adaptive algorithm calculates the most suitable signal timing: more congested lanes are allocated longer green-light times, and less busy lanes get shorter waiting times, which enhances the overall efficiency.

YOLO is refreshingly simple: see Figure 1. The remainder of this paper is organized as follows: Section 2 discusses related work in the field of intelligent traffic control. Section 3 presents the system architecture, detailing the hardware and software components used. Section 4 explains the methodology, including image acquisition, traffic density analysis, and dynamic signal control. Section 5 describes the implementation and experimental results obtained from testing the system under various traffic conditions. Section 6 discusses the conclusions and future directions for enhancing the system's capabilities, including edge AI deployment and IoT integration for smart city applications.

2. Object Detection and the YOLO Approach

Object detection is a critical task in computer vision that involves identifying and localizing multiple objects within an image. Traditional object detection approaches, such as region-based convolutional neural networks (R-CNN), rely on multi-stage pipelines involving region proposal, feature extraction, and classification. Although effective, these methods suffer from high computational costs and slow inference speeds, making them impractical for real-time applications such as autonomous driving, surveillance, and intelligent traffic control.

To overcome these limitations, the You Only Look Once (YOLO) framework was introduced as a real-time object detection model that significantly improves both accuracy and speed. Unlike traditional region-proposal-based architectures, YOLO adopts a single convolutional neural network (CNN) to simultaneously predict multiple bounding boxes and class probabilities for objects in an image. By framing object detection as a direct regression problem, YOLO eliminates the need for complex feature extraction and region proposal steps, enabling fast and efficient detection in a single forward pass through the network.

The PASCAL VOC dataset has 20 labelled classes, so C = 20. Our final prediction is a 7 × 7 × 30 tensor.

2.1. Network Design

We develop this model using a convolutional neural network and assess its performance on the PASCAL VOC detection dataset [9]. The network's early convolutional layers are responsible for extracting image features, while the fully connected layers predict the final output, including class probabilities and bounding box coordinates.
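Following the standard YOLO formulation, the prediction tensor has shape S × S × (B·5 + C); with the usual setting of S = 7 and B = 2 boxes per grid cell, and the C = 20 VOC classes above, this gives the 7 × 7 × 30 tensor. A minimal sketch of that layout and of decoding one grid cell (the confidence threshold is an illustrative assumption, not a value from the paper):

```python
import numpy as np

# YOLO output layout: an S x S grid where each cell predicts B boxes
# (x, y, w, h, confidence) plus C conditional class probabilities.
S, B, C = 7, 2, 20
pred = np.random.rand(S, S, B * 5 + C)   # stand-in for a real network output
assert pred.shape == (7, 7, 30)

def decode_cell(cell, conf_thresh=0.25):
    """Split one grid cell's 30-vector into boxes and class scores,
    keeping only boxes whose confidence exceeds the threshold."""
    boxes = cell[:B * 5].reshape(B, 5)   # each row: x, y, w, h, confidence
    class_probs = cell[B * 5:]           # C conditional class probabilities
    kept = [b for b in boxes if b[4] >= conf_thresh]
    return kept, int(np.argmax(class_probs))
```

Thresholding by confidence in this way corresponds to step (3) of the pipeline in Figure 1; non-max suppression would then be applied to the kept boxes.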
Figure 3: The network architecture. Stacked convolutional layers with 2×2-s-2 pooling reduce the feature maps (through 112, 56, 28, and 14), followed by two fully connected layers.
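The adaptive timing rule described in the introduction (more congested lanes receive longer green-light times, less busy lanes shorter ones) can be sketched as a density-proportional allocation. The cycle length and the minimum/maximum bounds below are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch of density-proportional green-time allocation.
# CYCLE, MIN_GREEN and MAX_GREEN are assumed values, not from the paper.
CYCLE = 120      # total green time to distribute per cycle (seconds)
MIN_GREEN = 10   # floor so an empty lane still gets a usable phase
MAX_GREEN = 60   # cap so one lane cannot monopolize the cycle

def green_times(vehicle_counts):
    """Allocate green time per lane in proportion to detected vehicles."""
    total = sum(vehicle_counts)
    if total == 0:                       # no traffic: split the cycle evenly
        return [CYCLE // len(vehicle_counts)] * len(vehicle_counts)
    return [int(min(MAX_GREEN, max(MIN_GREEN, CYCLE * n / total)))
            for n in vehicle_counts]
```

The per-lane vehicle counts would come from the YOLO detections described above; the Arduino controller would then drive each signal phase for the computed duration.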