
Mulfu Cek Persentase AI

Document Details

Submission ID: trn:oid:::1:3166627395
Submission Date: Feb 26, 2025, 11:08 AM GMT+2
Download Date: Feb 26, 2025, 11:09 AM GMT+2
File Name: FcL8yCeOFZtKl7l2jVsO.pdf
File Size: 562.8 KB
Pages: 10
Words: 5,313
Characters: 29,064

AI Writing Overview

26% detected as AI. Caution: Review required.

The percentage indicates the combined amount of likely AI-generated text as well as likely AI-generated text that was also likely AI-paraphrased. It is essential to understand the limitations of AI detection before making decisions about a student's work. We encourage you to learn more about Turnitin's AI detection capabilities before using the tool.

Detection Groups

1. AI-generated only: 26%
   Likely AI-generated text from a large-language model.

2. AI-generated text that was AI-paraphrased: 0%
   Likely AI-generated text that was likely revised using an AI-paraphrase tool or word spinner.

Disclaimer
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify
writing that is likely AI generated as AI generated and AI paraphrased or likely AI generated and AI paraphrased writing as only AI generated) so it should not be used as the sole basis for
adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any
academic misconduct has occurred.

Frequently Asked Questions

How should I interpret Turnitin's AI writing percentage and false positives?


The percentage shown in the AI writing report is the amount of qualifying text within the submission that Turnitin’s AI writing
detection model determines was either likely AI-generated text from a large-language model or likely AI-generated text that was
likely revised using an AI-paraphrase tool or word spinner.

False positives (incorrectly flagging human-written text as AI-generated) are a possibility in AI models.

AI detection scores under 20%, which we do not surface in new reports, have a higher likelihood of false positives. To reduce the
likelihood of misinterpretation, no score or highlights are attributed and are indicated with an asterisk in the report (*%).

The AI writing percentage should not be the sole basis to determine whether misconduct has occurred. The reviewer/instructor
should use the percentage as a means to start a formative conversation with their student and/or use it to examine the submitted
assignment in accordance with their school's policies.

What does 'qualifying text' mean?


Our model only processes qualifying text in the form of long-form writing. Long-form writing means individual sentences contained in paragraphs that make up a
longer piece of written work, such as an essay, a dissertation, or an article, etc. Qualifying text that has been determined to be likely AI-generated will be
highlighted in cyan in the submission, and likely AI-generated and then likely AI-paraphrased will be highlighted purple.

Non-qualifying text, such as bullet points, annotated bibliographies, etc., will not be processed and can create disparity between the submission highlights and the
percentage shown.


JOIN (Jurnal Online Informatika)


p-ISSN: 2528-1682, e-ISSN: 2527-9165
Volume 8 Number 1 | June 2023: 1-3
DOI: 10.15575/join.xxxxx.xx

Real-Time Parking Space Detection: Performance Comparison of YOLOv5 and YOLOv8

Rafi Ardinata Riskiansyah1, Farah Zakiyah Rahmanti2, Yohanes Setiawan3
1,2,3 Department of Information Technology, Telkom University Surabaya, Indonesia

Article Info

Article history:
Received Sep 3, 2019
Revised May 17, 2020
Accepted June 28, 2020

Keywords:
Deep Learning
Object Detection
Parking Availability
YOLOv5
YOLOv8

ABSTRACT

The emergence of urban agglomerations has made finding parking slots increasingly difficult, creating a need for suitable parking management systems. This paper focuses on the design and development of a prototype detection system that automatically identifies open and occupied slots in a parking lot using deep learning methods. For the analysis, 61 images depicting various parking lots during the day were used. The set was split with 70% of the data for training, 20% for validation, and 10% for testing, with augmentation applied to increase model performance. Images were prepared in Roboflow with bounding box annotations in the YOLO format, and all slots were classified as either occupied or free. The research follows a straightforward procedure consisting of the following stages: 1. Preparation: gathering and classifying images; 2. Labelling: identification and detailed labelling of parking slots; 3. Training: training YOLOv5 and YOLOv8 with hyperparameter optimization over 200 epochs; 4. Comparison: assessment of the developed models based on mAP@50, precision, recall, and inference time. The results indicate that while YOLOv8 offers higher recall and faster inference, which makes it best suited for real-time detection, YOLOv5 is superior in precision, accuracy, and reducing false positive rates. The confusion matrices show that YOLOv8 performed better than YOLOv5 in differentiating between vacant spaces and parked vehicles. The research established that, while YOLOv5 does the best job of minimizing false detections, YOLOv8 is more effective in multi-environment settings.

Corresponding Author:
Rafi Ardinata Riskiansyah,
Information Technology Department, Faculty of Informatics, Telkom University Surabaya
Jl. Ketintang No. 156, Ketintang, Kec. Gayungan, Surabaya, East Java, Indonesia. 60231
Email: [email protected]

1. INTRODUCTION

A parking lot is a place allocated for a vehicle to stop while the driver takes a break from driving. It is an integral component of the transportation system and, when used effectively, facilitates parking for all [1]. In outdoor parking lots in particular, low visibility means drivers often struggle to find an empty parking slot, which results in vehicles constantly circling the parking area in search of space [2].
Researchers and practitioners are adopting new technologies such as computer vision (CV) and deep learning (DL) for automated monitoring of parking areas. This enables real-time observation of parking spaces, which improves resource allocation. Of the many object detection algorithms, the You Only Look Once (YOLO) approach is among the most popular and effective. It localizes and classifies objects in an image in a single step, making it one of the most accurate systems available today [3], and it achieves a remarkable increase in accuracy over other systems in identifying objects.
Less accurate systems require a series of separate stages, which makes them slow; YOLO, by contrast, lends itself to immediate, real-time deployment [4]. This is especially relevant because YOLO performs well in identifying cars, helping to create more advanced and inexpensive parking solutions [5].
Along with improvements in speed, accuracy, and reliability in various environments, YOLO has
undergone changes from previous versions to the current version. The object detection models that have
gained much attention in both research and practical work are YOLOv5 and YOLOv8. YOLOv5 is well-known
for its effective object detection, which significantly reduces the probability of false positives. Though YOLOv5
achieves a high level of precision, it often has challenges with recall. This means that it loses out on detecting
some important objects, which is crucial for applications or systems where safety is a priority [6]. On the other
hand, YOLOv8 is said to perform significantly better than the other models in terms of recall metrics, which
means it is able to identify more relevant objects, especially in low-light or highly cluttered environments [7].
The careful integration of the YOLOv8 object recognition system into an intelligent parking management framework has resulted in a substantial increase in system speed: the system can now process up to 45 frames per second (FPS), which significantly improves responsiveness in open parking lot scenarios [8]. In addition, the model's high detection accuracy increases reliability when separating occupied and unoccupied slots that are poorly illuminated or filled with obstacles. Integrating YOLOv8 with OpenCV provides further benefits, such as optimizing the system for video-stream tracking and providing feedback to users in real time [9].
The purpose of this research is to design and implement an autonomous system to count the number
of parked vehicles by comparing the performance of YOLOv5 and YOLOv8. The developed system will utilize
OpenCV as the main framework for image processing and integration with object detection. The comparison
between these two models will be based on some core benchmarks such as detection accuracy (mAP@50),
precision, recall, and inference time [10]. By analyzing these two models, this research hopes to answer the
question of which model is more suitable for the implementation of computer vision-based smart parking
systems.
Several previous studies have tried to address the parking management problem by applying
computer vision and deep learning. Novandra Rizkatama et al. (2021), for example, developed an intelligent vehicle counter and parking space detection system in which vehicles were detected using YOLOv4, achieving 72.8% accuracy. However, this work had difficulty with occlusion, where overlapping objects obscure one another, which made it less effective in busy environments [11]; handling occluded objects is an area in which YOLOv8 improves through stronger recall and feature learning. In a similar study, Lestari et al. (2023) built a basic OpenCV system to monitor parking space occupancy rates and reported a 92.6% detection rate when examining pre-recorded footage of parked cars [12]. The system, however, was not evaluated under low-light scenarios, leaving its real-world robustness unclear; this is a weakness that YOLO models, especially YOLOv8, address with better feature extraction.
In a different experiment, Tanuwijaya and Fatichah (2020) applied a combination of AlexNet with
YOLO technology for the automatic marking of free parking spaces as well as vehicle occupancy detection in
CCTV videos. The average accuracy achieved was 93.48% and it was found that the system was effective in
medium and low illumination [13]. Despite achieving 93.48% accuracy, the effectiveness of the AlexNet-YOLO
combination has only been evaluated under test light conditions. This gap also appears in the model's ability
to adapt to light and environmental variations, which we address through data augmentation and the adaptive
architecture of YOLOv8. In contrast, Calista et al. (2023) used an image-processing approach with the HOG
technique to identify parking slots in a supermarket [14]. Their HOG-based system relies on precise camera
placement and careful edge detection, making it difficult to adapt to different types of parking lot
configurations. This drawback highlights the need for a model that achieves good performance across multiple contexts, something that is at the core of the YOLO architecture.
More recently, Primasari et al. (2024) developed a smart traffic light system integrated with YOLOv8 and achieved a true-positive accuracy of 68.9%. While their work demonstrates the potential of YOLOv8 to detect objects in real time, the model has not yet achieved optimal performance and requires
additional data and training [15]. This highlights the gap in achieving high accuracy with limited data, which
we addressed through careful hyperparameter tuning and data augmentation.
This research aims to evaluate the effectiveness of the two YOLO models in detecting the availability of outdoor parking spaces. YOLOv5 and YOLOv8 are implemented with OpenCV to provide an efficient, scalable, intelligent parking solution that can adapt to different environmental conditions. It is therefore expected that this research will contribute to the development of a smarter and more efficient transportation system.
This research is organized into four key phases: (1) Preparation, in which relevant data is gathered
and arranged; (2) Labeling, in which the parking slots and the vehicles are labelled using bounding boxes; (3)
Training, in which YOLOv5 and YOLOv8 models are trained, optimized, and tuned through hyperparameters;
and (4) Comparison, in which both models are tested against important metrics to assess their application to
real-life parking detection situations.

2. METHOD

2.1. Data Preparation


The research dataset is composed of photographs and videos captured by a drone over outdoor parking lots during the day. Altogether, 61 frames were extracted from the video recordings and combined with aerial shots, all annotated with bounding boxes for cars and parking spots. The dataset was split into three parts: 70% for training, 20% for validation, and 10% for testing. To increase the diversity of the dataset and the generalization ability of the models, augmentation methods such as rotation, flipping, and brightness modification were applied [16].
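As a concrete illustration of this step, the sketch below shows one way the 70/20/10 split and the described augmentations could be reproduced with OpenCV. The function names, rotation range, and brightness range are assumptions for illustration only, not the exact settings of this study; in the actual pipeline, Roboflow applies the augmentations and transforms the bounding boxes accordingly.

```python
import random
import cv2

def split_dataset(image_paths, seed=42):
    """Shuffle image paths and split them 70/20/10 into train/val/test."""
    random.seed(seed)
    paths = list(image_paths)
    random.shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

def augment(image):
    """Return rotated, flipped, and brightness-shifted copies of one image."""
    variants = []
    h, w = image.shape[:2]
    # Small random rotation around the image centre (angle range is illustrative).
    angle = random.uniform(-15, 15)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    variants.append(cv2.warpAffine(image, m, (w, h)))
    # Horizontal flip.
    variants.append(cv2.flip(image, 1))
    # Brightness modification.
    beta = random.uniform(-40, 40)
    variants.append(cv2.convertScaleAbs(image, alpha=1.0, beta=beta))
    return variants
```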

2.2. Dataset Labelling


The collected images were labeled with bounding boxes to designate vehicles and parking slots. The labeling step was performed using Roboflow, a web-based graphical image annotation platform. Each vehicle is marked with a bounding box, and each parking slot is categorized as occupied (car) or empty, as shown in Figure 1. The labels are stored in YOLO format, consisting of the class id followed by the normalized coordinates of the bounding box.

Figure 1. Labeled Image (Red: cars, Purple: empty)
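To make the label format concrete, the following minimal sketch parses one YOLO-format line and converts it to pixel corner coordinates. The class mapping (0 for cars, 1 for empty) and the example values are assumptions for illustration only.

```python
def parse_yolo_label(line, img_w, img_h):
    """Parse one YOLO line: 'class_id x_center y_center width height' (all normalized)."""
    class_id, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Convert centre/size to corner coordinates for drawing or IoU computation.
    x1, y1 = xc - w / 2, yc - h / 2
    x2, y2 = xc + w / 2, yc + h / 2
    return int(class_id), (x1, y1, x2, y2)

# Example: class 0 (assumed to be "cars") near the centre of a 1280x1280 image.
print(parse_yolo_label("0 0.5 0.5 0.1 0.2", 1280, 1280))
```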

This systematic approach aims to improve the accuracy and precision of the labeled data. All captured images were labeled meticulously through the Roboflow platform, with every vehicle in the parking lot enclosed in its own bounding box and every slot categorized as cars or empty. To keep the annotations reliable, each labeled image was additionally reviewed for accuracy, and bounding boxes were adjusted wherever discrepancies were detected. The final annotated dataset was exported in YOLO format for direct use with the object detection models. Figure 2 illustrates the distribution of labels in the dataset used to train the object detection models: the bar graph shows the number of instances of the two categories, with dark blue bars designating "cars" and light blue bars designating "empty".


Figure 2. Data Distribution for Training
Figure 3. Dataset Sample

2.3. YOLOv8 Architecture


YOLO (You Only Look Once) is an object detection algorithm released in 2015 by Joseph Redmon and Ali Farhadi. YOLO processes the entire image in one forward pass through a Convolutional Neural Network (CNN) and predicts object bounding boxes together with class probabilities in real time [17]. YOLO can perform real-time object recognition at around 45 frames per second [8].
YOLOv8 features an architecture equipped with three detection heads [18]. Each head is designed to
detect objects of different scales. The first head, P3, specializes in detecting small objects, while the second
head, P4, focuses on medium-sized objects. The third head, P5, is responsible for detecting large objects. In this
study, object detection is performed on video footage (refer to Figure 3). The proposed architecture is available
in Figure 4.

Figure 4. YOLOv8 Model Architecture Layer
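The following sketch illustrates how a trained YOLOv8 model of this kind might be run on drone video with OpenCV, in line with the detection setup described above. The weight and video file names and the class-index mapping (0 for cars, 1 for empty) are assumptions for illustration, and the snippet uses the Ultralytics API rather than the authors' exact code.

```python
import cv2
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("best.pt")                       # assumed path to trained YOLOv8 weights
cap = cv2.VideoCapture("parking_lot.mp4")     # assumed drone footage

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # One forward pass per frame; boxes, classes, and confidences are returned together.
    result = model(frame, imgsz=1280, verbose=False)[0]
    occupied = int((result.boxes.cls == 0).sum())  # assuming class 0 = cars
    empty = int((result.boxes.cls == 1).sum())     # assuming class 1 = empty
    annotated = result.plot()
    cv2.putText(annotated, f"occupied: {occupied}  empty: {empty}", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("parking", annotated)
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```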


2.4. YOLOv5 Architecture


The YOLOv5 architecture, as depicted in Figure 5, can be divided into three main components:
Backbone, Neck, and Head. The Backbone is responsible for feature extraction. It implements a series of
BottleNeckCSP (Cross Stage Partial Networks) layers that improve gradient flow and minimize computation
simultaneously. In addition, a Spatial Pyramid Pooling (SPP) module is also included to increase the reception
field and improve object detection performance [19].
The Neck further refines the feature maps through a combination of Concatenation (Concat) and UpSampling layers used for multi-scale feature fusion. It also contains additional BottleNeckCSP and 1x1 Convolution layers to adjust the depth and width of the feature maps, and Conv3x3 S2 blocks ensure that spatial information is retained even when resolution is reduced for efficient processing [20].
Finally, the Head performs object classification and bounding box regression. It consists of multiple Conv1x1 layers applied to the refined feature maps to predict the object category, confidence score, and location coordinates. The YOLOv5 architecture is designed for real-time object detection with a balance between accuracy and inference speed.

Figure 5. YOLOv5 Model Architecture Layer
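As a sketch of how the Head's outputs are consumed in practice, the snippet below loads YOLOv5 through PyTorch Hub and prints each predicted box with its class and confidence. The weight path, image name, and class mapping are assumptions for illustration, not the authors' actual files.

```python
import torch

# Load YOLOv5 via PyTorch Hub; "best.pt" is an assumed path to custom trained weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25  # confidence threshold for reported detections

results = model("parking_frame.jpg", size=1280)  # assumed test image
# Each row of results.xyxy[0] is [x1, y1, x2, y2, confidence, class_id],
# i.e. the bounding-box regression plus the class/confidence output of the Head.
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    label = "cars" if int(cls) == 0 else "empty"  # assumed class mapping
    print(f"{label}: conf={conf:.2f}, box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```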

2.5. Recall
Recall, also known as Sensitivity or True Positive Rate (TPR), measures the model's ability to correctly
identify all relevant instances (true positives) from the total number of actual positives. It is particularly useful
in scenarios where missing a positive instance (e.g., a vehicle in a parking space) is costly, with the formula as
in (1) [21].

$$\mathrm{Recall} = \frac{\mathrm{True\ Positives}\ (TP)}{\mathrm{True\ Positives}\ (TP) + \mathrm{False\ Negatives}\ (FN)} \qquad (1)$$

2.6. Precision
Precision measures the proportion of instances predicted as positive that are actually correct (true positives). It is particularly important in scenarios where false positives are costly, such as identifying empty spaces as occupied, with the formula as in (2) [21].

$$\mathrm{Precision} = \frac{\mathrm{True\ Positives}\ (TP)}{\mathrm{True\ Positives}\ (TP) + \mathrm{False\ Positives}\ (FP)} \qquad (2)$$

2.7. Mean Average Precision


Mean Average Precision (mAP) is defined as a single metric in the performance evaluation of object
detection models. It considers the average precision (AP) of multiple classes defined by different Intersection
over Union (IoU) values, and then averages those values for all classes. mAP is popularly used in object
detection tasks due to the fact that it combines precision and recall, and is calculated as in (3) [22].


$$AP = \int_{0}^{1} \mathrm{Precision}(r)\, dr \qquad (3)$$

The mAP value is obtained by averaging the AP values of all classes; the combined average AP (mAP)
value increases when one of the AP values is higher. The formula for calculating mAP is as follows (4) [22].

$$\mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i \qquad (4)$$
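To tie equations (1) through (4) together, the short sketch below computes precision and recall from detection counts, approximates AP as the area under a precision-recall curve, and averages AP over classes to obtain mAP. The numbers at the end are toy values for illustration only, not results from this study.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Equations (1) and (2): recall = TP/(TP+FN), precision = TP/(TP+FP)."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return precision, recall

def average_precision(recalls, precisions):
    """Equation (3): area under the precision-recall curve (trapezoidal approximation)."""
    order = np.argsort(recalls)
    return float(np.trapz(np.asarray(precisions)[order], np.asarray(recalls)[order]))

def mean_average_precision(ap_per_class):
    """Equation (4): mAP is the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)

# Toy example values, for illustration only.
p, r = precision_recall(tp=90, fp=5, fn=10)
print(round(p, 3), round(r, 3))                          # 0.947 0.9
print(average_precision([0.0, 0.5, 1.0], [1.0, 0.9, 0.8]))  # ~0.9
print(mean_average_precision([0.95, 0.88]))              # 0.915
```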

2.8. Model Training


The YOLOv5 and YOLOv8 models were trained with tuned hyperparameters to maximize performance and accuracy. Each model was trained for 200 epochs, a number judged sufficient for convergence with minimal overfitting. The input image size (imgsz) was set to 1280x1280 pixels so that the models capture enough detail for outdoor object detection, especially for relatively small objects such as parked cars.
The batch size of 8 was chosen to balance the computational memory required against training stability. Both the initial learning rate (lr0) and the final learning rate (lrf) were set to 0.01, keeping the learning rate effectively constant while gradients decrease. This lets the models adjust their weights adaptively while avoiding overly aggressive oscillations that can break convergence.
Hyperparameter optimization was performed with the aim of maximizing detection accuracy within the available resources. The hyperparameters are listed in Table 1, and a training sketch follows the table.

Table 1. Training Hyperparameters

Hyperparameter          Value
Epochs                  200
Image Size              1280
Batch Size              8
Initial Learning Rate   0.01
Final Learning Rate     0.01
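A minimal sketch of how a run with the Table 1 values could be launched using the Ultralytics API is shown below. The dataset YAML and checkpoint names are assumptions for illustration, and note that in this API lrf is interpreted as the final learning rate expressed as a fraction of lr0.

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint (assumed name) and fine-tune on the parking dataset.
model = YOLO("yolov8n.pt")
model.train(
    data="parking.yaml",   # assumed dataset config exported from Roboflow
    epochs=200,            # Epochs
    imgsz=1280,            # Image Size
    batch=8,               # Batch Size
    lr0=0.01,              # Initial Learning Rate
    lrf=0.01,              # Final Learning Rate (as a fraction of lr0 in this API)
)

# The YOLOv5 run can be launched analogously from the ultralytics/yolov5 repository, e.g.:
#   python train.py --data parking.yaml --epochs 200 --img 1280 --batch-size 8
```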

2.9. Comparison of Training Results


To find out how effective the developed parking detection system is, an experiment was set up comparing the performance of the two YOLO models, YOLOv5 and YOLOv8, with the same set of hyperparameters. In addition, detections were evaluated at different Intersection over Union (IoU) threshold values to see how they affect the detection metrics; a minimal IoU computation is sketched below.
The dataset used for this evaluation includes images taken during bright daylight to thoroughly evaluate the real-world adaptability of each model. The goal of this performance analysis is to determine which model best handles different IoU levels and to examine real-time detection of occupied and empty parking slots.
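The helper below is a minimal sketch of the IoU measure used for these thresholds; the corner-format boxes and the example values are illustrative, not taken from the study's data.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a true positive at threshold t only if IoU >= t.
print(round(iou((0, 0, 100, 100), (50, 0, 150, 100)), 3))  # 0.333
```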

3. RESULT AND DISCUSSION

This paper presents the training and validation process of YOLOv5 and YOLOv8 models on a
customized dataset created from videos. In the first step, a set of 61 images is prepared and labeled with respect
to empty and occupied parking slots.
To improve model performance, several data augmentation approaches such as rotation, brightness change, and flipping were applied. These changes help the models generalize under various conditions. The dataset was split into proportions of 70%, 20%, and 10% for training, validation, and testing respectively for optimal learning.

3.1. Experiment Result Comparison


This study presents a detailed analysis of the performance metrics of YOLOv5 and of the newer YOLOv8 model after 200 training epochs. The comparison focuses on vital metrics including Recall, Precision, mean Average Precision (mAP), and Inference Time. These metrics are important in
determining the efficiency of the models for real-time object detection. The results are summarized in Table 2.

Table 2. Performance Comparison of the YOLOv5 and YOLOv8 Models

Framework    mAP@50    Precision   Recall    Inference Time
YOLOv5       97.71 %   96.59 %     96.54 %   2.5 ms
YOLOv8       95.99 %   94.63 %     97.81 %   1.7 ms
Difference   -1.72 %   -1.96 %     +1.27 %   -0.8 ms

From the table comparing YOLOv5 and YOLOv8, several important points can be deduced. YOLOv5 has a higher mAP@50 than YOLOv8, at 97.71%, meaning YOLOv8 is slightly weaker in overall detection quality. Regarding precision, YOLOv5 also outperforms YOLOv8 with 96.59% against 94.63%, so YOLOv5 produces fewer false positives. However, the recall of YOLOv8 is higher than that of YOLOv5, at 97.81% versus 96.54%, meaning more of the objects in an image are captured when using YOLOv8. In addition, YOLOv8 has a lower inference time of 1.7 ms compared to 2.5 ms for YOLOv5. Therefore, YOLOv8 is more suitable for real-time applications.

3.1.2. Confusion Matrix Analysis

Figure 6. Normalized Confusion Matrix of YOLOv8
Figure 7. Normalized Confusion Matrix of YOLOv5

Based on the normalized confusion matrices above, there are significant differences in model performance. In Figure 6, the YOLOv8 model shows better performance with fewer classification errors: the values on the main diagonal are closer to 1.00, indicating that the model classifies objects more accurately, and the values outside the main diagonal are smaller, meaning fewer prediction errors than in Figure 7. For example, in the "cars" class, the proportion of instances misclassified as "background" is lower than in Figure 7.
Meanwhile, in Figure 7, the YOLOv5 model makes more classification errors, as seen from the larger off-diagonal values. One striking example is the "cars" class, which has a value of 0.32 in the "background" cell, indicating that this model more often misclassifies cars as part of the background. In addition, Figure 7 shows a more even color distribution, indicating greater confusion in distinguishing between classes.
Overall, the model in Figure 6 performs better than the model in Figure 7: it recognizes each class more accurately and makes fewer classification errors, especially in distinguishing "cars" from the background. The model in Figure 6 is therefore recommended because of its higher per-class precision and lower classification errors. A sketch of how such a normalized matrix can be computed from matched predictions is given below.
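The snippet below is a minimal sketch of producing a row-normalized confusion matrix like those in Figures 6 and 7; the label arrays are placeholders for matched ground-truth and predicted class indices, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["cars", "empty", "background"]
y_true = np.array([0, 0, 0, 1, 1, 2, 0, 1])   # ground-truth class indices (toy values)
y_pred = np.array([0, 0, 2, 1, 1, 2, 0, 0])   # matched prediction class indices (toy values)

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
cm_normalized = cm / cm.sum(axis=1, keepdims=True)  # each row sums to 1
for name, row in zip(classes, cm_normalized):
    # Diagonal entries close to 1.00 mean the class is usually predicted correctly.
    print(name, np.round(row, 2))
```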

3.1.3. Class-Specific Model Performance Analysis


This subsection presents a comparative analysis of the performance of the YOLOv5 and YOLOv8 models in detecting parking space occupancy for the Cars (parked car) and Empty (available parking spot) classes. The evaluation was
conducted using three metrics, Precision, Recall, and mAP, which reflect the accuracy and reliability of the models in object detection. Table 3 shows that across all of these metrics, YOLOv8 improves on YOLOv5.

Table 3. Model Performance Comparison Results between Classes

Framework    Class   Precision   Recall    mAP
YOLOv5       Cars    84.88 %     73.00 %   52.52 %
YOLOv5       Empty   94.90 %     84.55 %   74.16 %
YOLOv8       Cars    93.02 %     81.63 %   61.91 %
YOLOv8       Empty   95.92 %     87.04 %   76.36 %
Difference   Cars    +8.14 %     +8.63 %   +9.39 %
Difference   Empty   +1.02 %     +2.49 %   +2.20 %

The comparison between YOLOv5 and YOLOv8 shows a marked difference in the performance gains for the Cars and Empty classes. For the Cars class, precision increased by 8.14%, recall by 8.63%, and mAP by 9.39%, suggesting that YOLOv8 identifies parked cars more accurately and commits fewer classification errors. With its higher recall, YOLOv8 also captures more of the cars within the images than YOLOv5, reducing the number of missed vehicles (false negatives). The progress in the Empty class was smaller: precision increased by only 1.02%, recall by 2.49%, and mAP by 2.20%. This suggests that YOLOv5 already performs reasonably well at recognizing empty parking spaces, with YOLOv8 providing only minor improvements. The discrepancy indicates that YOLOv8 is better at handling more complicated objects, such as cars, while for empty slots the previous model was already quite effective. Accurate detection of cars is the most important capability of a parking space detection system, since it prevents occupied areas from being misreported as empty. In view of this, adopting YOLOv8 is preferable for increasing the efficiency of parking space availability detection.

3.2. Discussion
The mAP@50 analysis for YOLOv5 and YOLOv8 summarizes the overall accuracy of the parking occupancy detection task, but a single aggregate score cannot capture every nuance between the two systems. Overall, YOLOv5 outperforms its counterpart on mAP@50 and precision, whereas YOLOv8 significantly surpasses YOLOv5 in recall. This means the two models serve different purposes in parking management systems: YOLOv5 suits settings where high confidence and a low false positive rate are critical, while YOLOv8's higher recall makes it more reliable for dynamic and complex outdoor environments. In systems that require precision and the lowest possible risk of false alarms, YOLOv5 yields more dependable detections; by the remaining measures, YOLOv8 proved best in its ability to adapt flexibly to different environments.
In terms of processing speed, YOLOv8 excels with an inference time of 1.7 ms, compared to 2.5 ms for YOLOv5. This edge makes YOLOv8 more suitable for systems that demand real-time processing, especially in scenarios that require a fast response to changes in parking status. The normalized confusion matrix analysis further indicates that YOLOv8 is better at minimizing classification errors than YOLOv5: YOLOv5 tends to misclassify cars as background more frequently than YOLOv8, resulting in higher detection errors.
The per-class comparison shows that the Cars class gains the most from YOLOv8, while the Empty class gains the least. YOLOv8 increased precision for the Cars class by +8.14%, recall by +8.63%, and mAP by +9.39% over YOLOv5, indicating that YOLOv8 detects parked vehicles with fewer faults. Meanwhile, the improvements in the Empty class are relatively small, with precision increasing by +1.02%, recall by +2.49%, and mAP by +2.20%. These differences indicate that YOLOv5
is fairly effective in identifying empty parking spaces, while YOLOv8 makes a more substantial improvement
in detecting parked cars.

4. CONCLUSION

The experiments brought out the advantages and disadvantages of both YOLOv5 and YOLOv8 for parking occupancy detection. YOLOv5 was highly accurate in distinguishing occupied from unoccupied spaces, producing very few false positives. On the other hand, YOLOv8 stood out in recall and inference speed, making it the better option for real-time applications where prompt action is essential.
YOLOv8 performed markedly better on the Cars class than on the Empty class, and its results for cars are clearly better than those of YOLOv5: compared to YOLOv5, it achieves a precision increase of 8.14%, a recall increase of 8.63%, and an mAP gain of 9.39%. For the Empty class the gains are more moderate, with a 2.20% mAP increase and narrower precision and recall improvements of 1.02% and 2.49%, respectively.
In summary, YOLOv5 excels at reducing false positives and at precise detection of empty parking areas, whereas YOLOv8 is more suitable when the aim is to cover a larger area more quickly while reducing false negatives. To achieve better results in future work, the parking detection system could adopt a hybrid strategy that combines the precision of YOLOv5 with the recall of YOLOv8, using ensemble learning methods such as weighted averaging or stacking to improve the overall accuracy of parking space availability detection.

REFERENCES

[1] I. Adhibuana Priyatna and B. Darmawan, "Rancang Bangun Sistem Monitoring Parkir Mobil Indoor dengan Wireless Sensor Network Menggunakan NodeMCU ESP8266 Berbasis Internet of Things," Dielektrika: Jurnal Ilmiah Kajian Teori dan Aplikasi Teknik Elektro, vol. 11, no. 2, p. 89, Aug. 2024.
[2] K. Kumar, V. Singh, L. Raja, and S. N. Bhagirath, “A Review of Parking Slot Types and their Detection
Techniques for Smart Cities,” Smart Cities, vol. 6, no. 5, pp. 2639–2660, Oct. 2023, doi:
10.3390/smartcities6050119.
[3] T. Hidayat, R. F. Firmansyah, M. Ilham, M. N. Yazid, and P. Rosyani, “Analisis Kinerja Dan Peningkatan
Kecepatan Deteksi Kendaraan Dalam Sistem Pengawasan Video Dengan Metode YOLO,” JRIIN : Jurnal
Riset Informatika dan Inovasi, vol. 1, no. 2, 2023, [Online]. Available:
https://jurnalmahasiswa.com/index.php/jriin
[4] F. Xiao, H. Wang, Y. Li, Y. Cao, X. Lv, and G. Xu, “Object Detection and Recognition Techniques Based on
Digital Image Processing and Traditional Machine Learning for Fruit and Vegetable Harvesting Robots:
An Overview and Review,” Mar. 01, 2023, MDPI. doi: 10.3390/agronomy13030639.
[5] O. G. Ajayi, J. Ashi, and B. Guda, “Performance evaluation of YOLO v5 model for automatic crop and weed
classification on UAV images,” Smart Agricultural Technology, vol. 5, Oct. 2023, doi:
10.1016/j.atech.2023.100231.
[6] I. Apeināns, M. Sondors, L. Litavniece, S. Kodors, I. Zarembo, and D. Feldmane, “Cherry Fruitlet Detection
using YOLOv5 or YOLOv8?,” in Vide. Tehnologija. Resursi - Environment, Technology, Resources, Rezekne
Higher Education Institution, 2024, pp. 29–33. doi: 10.17770/etr2024vol2.8013.
[7] G. Jocher, “YOLOv8: State-of-the-Art Object Detection Model,” Ultralytics Documentation.
[8] Y. Gao, W. Liu, H. C. Chui, and X. Chen, “Large Span Sizes and Irregular Shapes Target Detection Methods
Using Variable Convolution-Improved YOLOv8,” Sensors, vol. 24, no. 8, Apr. 2024, doi:
10.3390/s24082560.
[9] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-
art for real-time object detectors,” Jul. 2022, doi: 10.48550/arXiv.2207.02696.
[10] I. Apeināns, M. Sondors, L. Litavniece, S. Kodors, I. Zarembo, and D. Feldmane, “Cherry Fruitlet Detection
using YOLOv5 or YOLOv8,” Vide. Tehnologija. Resursi - Environment, Technology, Resources, vol. 2, pp.
29–33, 2024, doi: 10.17770/etr2024vol2.8013.
[11] G. Novandra Rizkatama, A. Nugroho, and A. F. Suni, "Sistem Cerdas Penghitung Jumlah Mobil untuk Mengetahui Ketersediaan Lahan Parkir berbasis Python dan YOLO v4,"
Edu Komputika, vol. 8, no. 2, 2021. [Online]. Available: http://journal.unnes.ac.id/sju/index.php/edukom
[12] D. A. Lestari, M. W. Sardjono, and M. Mujahidin, “Aplikasi Penghitung Kapasitas Ruang Parkir pada
Lahan Parkir Kosong Menggunakan Library OpenCV pada Bahasa Pemrograman Python,” 2023.
[13] E. Tanuwijaya and C. Fatichah, “Penandaan Otomatis Tempat Parkir Menggunakan YOLO untuk
Mendeteksi Ketersediaan Tempat Parkir Mobil pada Video CCTV,” BRILIANT: Jurnal Riset dan
Konseptual, vol. 5, no. 1, 2020, doi: 10.28926/briliant.
[14] T. R. Calista, N. W. A. Majid, and R. Andrian, “Implementasi Image Processing dan Histogram of Oriented
Gradient untuk Mendeteksi Slot Parkir Suatu Supermarket,” Jurnal Sistem dan Teknologi Informasi
(JustIN), vol. 11, no. 3, p. 453, Jul. 2023, doi: 10.26418/justin.v11i3.55412.
[15] D. Primasari, G. Ferdian R, Z. Aulia, U. Tussyifaa, and A. R. Wiranto, "Sistem Smart Traffic Light Menggunakan Algoritma YOLOv8," Jurnal Teknologi Terapan, vol. 10, no. 1, 2024.
[16] C. Shorten, T. M. Khoshgoftaar, and B. Furht, “Text Data Augmentation for Deep Learning,” J Big Data,
vol. 8, no. 1, Dec. 2021, doi: 10.1186/s40537-021-00492-0.
[17] L. Rahma, H. Syaputra, A. H. Mirza, and S. D. Purnamasari, “Objek Deteksi Makanan Khas Palembang
Menggunakan Algoritma YOLO (You Only Look Once),” Jurnal Nasional Ilmu Komputer, vol. 2, no. 3, pp.
2746–1343, Aug. 2021, doi: 10.47747/jurnalnik.v2i3.534.
[18] J. Terven, D. M. Córdova-Esparza, and J. A. Romero-González, “A Comprehensive Review of YOLO
Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Dec. 01, 2023,
Multidisciplinary Digital Publishing Institute (MDPI). doi: 10.3390/make5040083.
[19] A. Prasetiadi, J. Saputra, I. Kresna, and I. Ramadhanti, “YOLOv5 and U-Net-based Character Detection
for Nusantara Script,” Jurnal Online Informatika, vol. 8, no. 2, pp. 232–241, Dec. 2023, doi:
10.15575/join.v8i2.1180.
[20] O. E. Olorunshola, M. E. Irhebhude, and A. E. Evwiekpaefe, "A Comparative Study of YOLOv5 and YOLOv7 Object Detection Algorithms," Journal of Computing and Social Informatics, vol. 2, no. 1, p. 1, 2023, doi: 10.33736/jcsi.5070.2023.
[21] F. M. Talaat and H. ZainEldin, “An improved fire detection approach based on YOLO-v8 for smart cities,”
Neural Comput Appl, vol. 35, no. 28, pp. 20939–20954, Oct. 2023, doi: 10.1007/s00521-023-08809-1.
[22] L. Tan, T. Huangfu, L. Wu, and W. Chen, "Comparison of RetinaNet, SSD, and YOLO v3 for real-time pill identification," BMC Med Inform Decis Mak, vol. 21, no. 1, Dec. 2021, doi: 10.1186/s12911-021-01691-8.
