INSTITUTE OF ENGINEERING
PULCHOWK CAMPUS
A
PROJECT PROPOSAL
ON
ECO-DETECT
SUBMITTED BY:
AADARSHA THAPA MAGAR (PUL077BEI002)
SANGAM RAI (PUL077BEI040)
SUSAN THAPA (PUL077BEI046)
SUBMITTED TO:
DEPARTMENT OF ELECTRONICS & COMPUTER ENGINEERING
Poush, 2080
Acknowledgements
We extend our sincere gratitude and special thanks to Asst. Prof. Santosh Giri and Asst. Prof. Bibha Sthapit of the Project Management Team of the Department of Electronics and Computer Engineering for their advice, continuous guidance, and encouragement. We also express our sincere thanks to all our lab instructors and seniors for their constant help and suggestions, most notably Assoc. Prof. Dr. Sanjeeb Prasad Pandey.
We sincerely thank the Department of Electronics and Computer Engineering, Pulchowk Campus, for giving us the opportunity to work on this project, expand our knowledge of Computer Vision, and work as a team. We would also like to thank all of our friends for helping and supporting us in carrying out the project and for giving us advice whenever we needed it.
Finally, we express our sincere gratitude to all those who have directly and indirectly helped us build this project.
Contents
Acknowledgements
Contents
List of Figures
List of Abbreviations
Abstract
1 Introduction
1.1 Background
1.2 Problem Statement
1.3 Objectives
1.4 Scope
2 Literature Review
2.1 Related Work
2.2 Related Theory
3 Proposed Methodology
3.1 Data Collection and Processing
3.2 Model Development
3.3 Model Training
3.4 Model Testing
3.5 Model Integration
4 Proposed Experimental Setup
4.1 Dataset Partition
5 Proposed System Design
6 Timeline
References
List of Figures
2.1 Non-Max Suppression
2.2 Intersection over Union (IoU)
List of Abbreviations
CNN Convolutional Neural Network
YOLO You Only Look Once
ML Machine Learning
CBS Central Bureau of Statistics
SWM Solid Waste Management
Abstract
Eco-detect is a simple project intended to expedite the process of segregating waste based on its degradability, since an efficient waste segregation process can promote a circular economy, diminish landfills, enhance recycling rates, and reduce waste overall. Waste management and the systematic sorting of waste play a significant role in ecological development around the world. There is a pressing need to lessen waste by recycling and reusing discarded materials, which reduces environmental problems and leads to a circular economy. This project aims to create an automated waste detection system using the YOLO framework that will gather waste images from a camera and categorize the waste materials so that the waste can be properly sorted into bins.
1. Introduction
1.1 Background
Solid waste management has long been a major environmental challenge globally, and Nepal has been facing huge issues in solid waste management as well, especially in its metropolitan cities. The metropolitan cities have been unable to improve their solid waste management systems due to population growth, urbanization, and other factors. Waste management in Kathmandu Metropolitan City relies primarily on landfill sites, and recycling activities are very limited and informal, which puts further stress on the landfills [1].
Although reduction, reuse, recycling, and prevention are high priorities in waste management, the traditional manual segregation of waste prevails, and recycling holds the lowest contribution in the SWM of many metropolitan cities [2]. Because the manual method of segregation is slow, it delays the recycling process.
With the aim of assisting waste classification, we propose Eco-detect as our project. It is essentially an object detection program integrated with simple hardware, with YOLO as its core architecture.
1.2 Problem Statement
• Currently, the system of manual hand-picking of waste prevails, which is prone to errors and limited to a certain pace.
• Statistically, reuse and recycling contribute the least to solid waste management [1].
1.3 Objectives
• To develop an object detection system using YOLOv5 for accurate trash classification.
• To optimize the model for real-time processing.
1.4 Scope
• Urban surveillance to detect littering in real time.
• Maintaining cleanliness around heritage sites, tourist destinations, and public spaces.
2. Literature Review
2.1 Related Work
Over the previous years, various works have been carried out with the aim of limiting the impact of the incorrect disposal of waste. Many neural network and image classification projects have been done before. Some of the previous works on image processing are listed below:
At the TechCrunch Disrupt Hackathon, a team built "Auto Trash" [3], an automatic garbage bin that sorts trash into recycling and compost based on its features. Their system uses a Raspberry Pi camera and has a pivoting top. The team used Google's TensorFlow engine and built their own layer on top of it for object detection.
D. Vinodha et al. recommended utilizing IoT for waste separation in paper [4]. The major goal was to create a Raspberry Pi-equipped smart bin integrated with sensors such as ultrasonic sensors and a Pi camera for image processing using the YOLO technique. The proposed idea is strong; however, outfitting each bin with a Raspberry Pi, sensors, motors, and a camera raised the cost of the bin, making the project unaffordable at all levels.
In paper [5], Md. Wahidur Rahman et al. proposed a model that is divided into two parts. One is an architectural layout for waste detection using a Raspberry Pi with a camera module and machine learning. The other is an IoT smart trash box built on a microcontroller with multiple sensors for real-time waste disposal. The paper presents the data calculation methodology of the proposed CNN model, ultrasonic sensor, and load measurement sensor, and it also presents several experimental data analyses to demonstrate the effectiveness of the proposed method.
2.2 Related Theory
The speed and simplicity of single-stage detection algorithms have made them significantly better than most two-stage object detectors. Furthermore, with the introduction of YOLO models, various applications have utilized them for object detection and recognition in many contexts. YOLO models have performed exceptionally well in comparison to their two-stage detector counterparts.
Deep Learning (DL) emerged in the early 2000s, following the popularity of Support Vector Machines (SVM), Multilayer Perceptrons (MLP), Artificial Neural Networks (ANN), and other similar methods. Researchers often classify DL as a subset of Machine Learning (ML), which is itself a subset of Artificial Intelligence (AI).
The authors of YOLO [5] reframed the problem of object detection as a regression problem instead of a classification problem. A convolutional neural network predicts the bounding boxes as well as the class probabilities for all the objects depicted in an image. Because the algorithm identifies the objects and their positions with bounding boxes by looking at the image only once, it is named You Only Look Once (YOLO).
Each predicted bounding box is described by the values (p_c, b_x, b_y, b_w, b_h, p(c_1), ..., p(c_n)), where:
• p_c: the probability that the underlying bounding box of the grid cell contains an object.
• b_x, b_y: the center of the predicted bounding box.
• b_w, b_h: the predicted dimensions of the bounding box.
• p(c_1), p(c_2), ..., p(c_n): the conditional class probabilities that the object belongs to each class, given p_c, where n is the number of classes/categories.
A grid cell predicts (B × 5 + n) values, where B is the number of bounding boxes per grid cell. As the image is divided into an S × S grid of cells, the output tensor has shape S × S × (B × 5 + n).
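As an illustration of this output shape, the prediction tensor can be laid out as follows; the values of S, B, and n here are hypothetical and not the actual configuration of our model.

import numpy as np

S, B, n = 7, 2, 4   # hypothetical grid size, boxes per cell, and class count
# Each cell predicts B boxes of 5 values (p_c, b_x, b_y, b_w, b_h)
# plus n conditional class probabilities.
values_per_cell = B * 5 + n
output = np.zeros((S, S, values_per_cell))
print(output.shape)  # (7, 7, 14)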
Each bounding box in a grid cell is assigned a confidence score (c_s) by multiplying the probability (p_c) with the Intersection over Union (IoU) between the ground-truth and predicted bounding boxes. If there is no object in the grid cell, the confidence score will be zero. Next, we calculate a class-specific score (c_ss) for each bounding box in all the grid cells. This score reflects the likelihood of the class appearing in that box and how accurately the predicted box fits the object.
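In symbols, these two scores can be written compactly as follows; this is a standard formulation of the YOLO scoring scheme, consistent with the description above, where p(c_i | object) denotes the conditional class probability:

c_s = p_c × IoU(ground truth, predicted box)
c_ss(i) = p(c_i | object) × c_s = p(c_i | object) × p_c × IoU(ground truth, predicted box)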
Once the bounding boxes have been filtered using a certain threshold on this score, we are left with a smaller number of boxes, although this number might still be quite high. To further refine the selection, we use a process called non-maximum suppression, which relies on the concept of Intersection over Union (IoU). The effect of non-maximum suppression can be seen in Figure 2.1.
IoU is a measure that can be used to compare two boxes, as shown in Figure 2.2. To apply non-maximum suppression, we start by selecting the box with the highest class score. Any other bounding box that overlaps with this box with an IoU greater than a predefined threshold is discarded. We then repeat this process with the remaining lower-scoring boxes until no overlapping boxes remain.
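To make the procedure concrete, the following is a minimal Python sketch of IoU and non-maximum suppression over boxes given as (x1, y1, x2, y2) tuples with separate scores; it is an illustrative sketch, not the exact implementation used by YOLOv5.

def iou(a, b):
    # Intersection over Union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, drop boxes overlapping it above the
    # threshold, and repeat with the remaining boxes.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Example: two overlapping detections of the same object and one separate box.
boxes = [(10, 10, 60, 60), (12, 12, 58, 62), (100, 100, 150, 150)]
scores = [0.9, 0.75, 0.8]
print(non_max_suppression(boxes, scores))  # -> [0, 2]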
3. Proposed Methodology
For the system, we have planned five stages of development: data collection, model development, model training, model testing, and integration. In addition to this approach, various dependencies, libraries, and tools will be utilized for the project. The most important libraries expected to be used in this project are NumPy, Keras, TensorFlow, Utils, Matplotlib, pandas, seaborn, Flask, and OpenCV.
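As a minimal sanity check of such an environment (assuming these packages are installed via pip; exact versions are not prescribed here, and the project-specific utils helpers are omitted), the core third-party imports can be verified as follows:

# Verify that the core dependencies are importable and report their versions.
import numpy, tensorflow, keras, matplotlib, pandas, seaborn, flask, cv2

for module in (numpy, tensorflow, keras, matplotlib, pandas, seaborn, flask, cv2):
    print(module.__name__, getattr(module, "__version__", "unknown"))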
3.1 Data Collection and Processing
To facilitate seamless collaboration and model validation, the annotated images will be stored in a dedicated folder on Google Drive. A shareable link will be generated, and permissions will be assigned to allow access to collaborators.
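If training is run in Google Colab (an assumption for illustration, not a requirement fixed by this proposal), the shared dataset folder can be accessed roughly as follows; the folder path shown is hypothetical.

# Mount Google Drive inside a Colab runtime so the annotated dataset
# folder is visible to the training scripts.
from google.colab import drive

drive.mount("/content/drive")
DATASET_DIR = "/content/drive/MyDrive/eco-detect/dataset"  # hypothetical folder name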
3.2 Model Development
The development of the waste detection model using YOLOv5 will involve several critical steps. Upon establishing the development environment and preparing the custom dataset, the model will be configured to meet the specific requirements of waste detection. The training strategy will be formulated, considering parameters such as image size, batch size, and the custom model configuration.
3.3 Model Training
The training process will entail iteratively exposing the model to the training dataset, enabling it to learn the identification and localization of waste objects. The model will refine its parameters through backpropagation, adjusting internal weights to minimize the disparity between predicted and ground-truth bounding boxes.
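As an illustration of how such a training run might be launched (a sketch assuming the ultralytics/yolov5 repository is cloned and the command is run from its root; the dataset config waste.yaml is hypothetical, and the final image size, batch size, and epoch count will be decided during development):

import subprocess

# Launch YOLOv5 training from the cloned ultralytics/yolov5 repository.
# Image size, batch size, epochs, and the dataset config shown here are
# placeholders, not the project's final settings.
subprocess.run(
    [
        "python", "train.py",
        "--img", "640",
        "--batch", "16",
        "--epochs", "100",
        "--data", "waste.yaml",        # hypothetical custom dataset config
        "--weights", "yolov5s.pt",     # start from pretrained YOLOv5s weights
        "--name", "eco_detect",        # run name under runs/train/
    ],
    check=True,
)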
TensorBoard will be employed during training to visualize and monitor the model’s per-
formance. This will allow for the analysis of metrics such as loss, precision, and recall,
providing insights into the model’s learning progress. The training phase will be conducted
for a predefined number of epochs, allowing the model to converge to an optimal state.
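TensorBoard can be pointed at the YOLOv5 run directory to follow these metrics during training; the sketch below assumes the repository's default log location of runs/train.

import subprocess

# Serve the training logs written by YOLOv5 (loss curves, precision, recall)
# on a local TensorBoard instance at http://localhost:6006.
subprocess.Popen(["tensorboard", "--logdir", "runs/train", "--port", "6006"])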
3.4 Model Testing
Following training, the model's performance will be evaluated on a separate validation dataset. This step will assess the model's generalization capabilities and ensure its ability to accurately detect waste objects in unseen data. Evaluation metrics, including precision, recall, and mean Average Precision (mAP), will provide a comprehensive measure of the model's effectiveness.
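Such an evaluation could be run with the repository's validation script, for example as sketched below; the weight and dataset paths are placeholders.

import subprocess

# Evaluate the trained weights on the validation split defined in waste.yaml,
# reporting precision, recall, and mAP.
subprocess.run(
    [
        "python", "val.py",
        "--weights", "runs/train/eco_detect/weights/best.pt",  # placeholder path
        "--data", "waste.yaml",
        "--img", "640",
    ],
    check=True,
)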
Once the model has demonstrated satisfactory performance on the validation set, its
trained weights will be exported. This will facilitate future inference scenarios, where the
model can be deployed to detect waste objects in real-world images or videos. The exported
weights will serve as a compact representation of the learned knowledge, enabling efficient
and effective waste detection in diverse environments.
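Loading the exported weights for later inference might look roughly like this, a sketch using the public torch.hub interface of YOLOv5; the weight file and image path are placeholders.

import torch

# Load a custom-trained YOLOv5 model from its exported weights and run it
# on a single image of waste; results include class labels and boxes.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
results = model("waste_sample.jpg")   # placeholder image path
results.print()                       # summary of detected waste classes
print(results.pandas().xyxy[0])       # boxes with confidence and class names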
Figure 3.1: Sequence diagram for Model Training
3.5 Model Integration
The trained model will finally be integrated with simple hardware that directs detected waste into the appropriate bin, for example by driving motors and changing their angular position.
4. Proposed Experimental Setup
4.1 Dataset Partition
We will divide the dataset into two subsets, a training set and a test set, in order to guarantee appropriate model assessment and validation. The partitioning procedure will take the following factors into account:
• Balanced Distribution: We will make sure that the distribution of difficulty levels is balanced across the training and test subsets. This will help encompass the entire spectrum of ideas that the model needs to learn.
• Random Assignment: Samples will be assigned to each subset at random. This random assignment will aid in the prevention of bias and overfitting of the model. A minimal sketch of such a split is shown below.
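The following is a minimal Python sketch of a random train/test split of image file names; the 80/20 ratio, fixed seed, and directory layout are assumptions for illustration, not fixed choices of this proposal.

import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=42):
    # Shuffle image file names reproducibly and split them into train/test lists.
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]

train_files, test_files = split_dataset("dataset/images")  # hypothetical folder
print(len(train_files), "training images,", len(test_files), "test images")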
A Google Drive folder will be used to store the image database and model weights and to fetch them conveniently, and a text editor or integrated development environment (IDE) will be used for writing and running code.
5. Proposed System Design
Figure 5.2: Interaction Model for Eco-detect
6. Timeline
References
[1] Mani Nepal, Apsara Karki Nepal, Madan S. Khadayat, Rajesh K. Rai, Priya Shyamsundar, and E. Somanathan. Low-cost strategies to improve municipal solid waste management in developing countries: Experimental evidence from Nepal. Environmental and Resource Economics, 84(3):729–752, 2023.
[3] Jay Donovan. Auto-Trash sorts garbage automatically at the TechCrunch Disrupt Hackathon. TechCrunch Disrupt Hackathon, San Francisco, CA, USA, Tech. Rep. Disrupt SF 2016, 2016.
[4] D. Vinodha, J. Sangeetha, B. Cynthia Sherin, and M. Renukadevi. Smart garbage system with garbage separation using object detection. 2020.
[5] Md. Wahidur Rahman, Rahabul Islam, Arafat Hasan, Nasima Islam Bithi, Md. Mahmodul Hasan, and Mohammad Motiur Rahman. Intelligent waste management system using deep learning with IoT. Journal of King Saud University-Computer and Information Sciences, 34(5):2072–2087, 2022.