Thesis Draft 08 Final Revision
ALGORITHM
An
Undergraduate Thesis
Presented to the Faculty of Electronics Engineering Department
College of Engineering and Architecture
University of Science and Technology of Southern Philippines
May 2023
APPROVAL SHEET
PANEL OF EXAMINERS
Approved and accepted in partial fulfillment of the requirements for the degree
Approved:
ABSTRACT
ACKNOWLEDGEMENT
The researcher would like to extend her deepest gratitude and appreciation to
the people who, in one way or another, contributed invaluable assistance and
encouragement to the completion of this study.
To her family, who provided emotional and financial support throughout this research.
To her friends and fellow batchmates for their support, encouragement, and
valuable prayers.
To Marichu Uy and Gee Ann Secadron for their efforts in helping the researcher find Robusta coffee farms in Bukidnon that could provide coffee cherry samples for the study.
To her thesis advisers, Engr. Kristine Mae P. Dunque and Engr. Dominic O.
Cagadas, for their invaluable guidance, support, and expertise throughout the entire
research process. Their continuous encouragement and constructive feedback have
been instrumental in shaping the direction and quality of this study.
Above all, to the ALMIGHTY GOD, for giving her the wisdom and guidance to make this study successful.
TABLE OF CONTENTS
Preliminaries Page
TITLE PAGE i
APPROVAL SHEET ii
ABSTRACT iii
ACKNOWLEDGEMENT iv
TABLE OF CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xi
CHAPTER 1 INTRODUCTION 1
1.1 Background of the Study 1
1.2 Statement of the Problem 5
1.3 Objectives of the Study 5
1.4 Conceptual Framework 6
1.5 Significance of the Study 7
1.6 Scope and Limitations 8
1.7 Definition of Terms 10
Recognition Technology for Bird
Identification System
2.5 Image Processing 26
2.5.1 Performance Study of YOLOv5 26
and Faster R-CNN for Autonomous
Navigation around Non-Cooperative
Targets
2.5.2 Face Mask Recognition System 28
with YOLOV5 Based on Image
Recognition
2.5.3 Real Time Object Detection and 31
Tracking Using Deep Learning and
OpenCV
CHAPTER 3 METHODOLOGY 45
3.1 Research Design 45
3.2 Research Variables 45
3.3 Research Setting 46
3.4 Research Procedure 46
3.4.1 Define the Problem, Specific 47
Aims and Objectives
3.4.2 Literature Review and 47
Conceptualization
3.4.3 System Designing 47
3.4.4 Software Installation and 59
Programming
3.4.5 Testing 56
3.4.6 Data Collection, Analysis and 57
Evaluation
REFERENCES 84
APPENDIX - A
APPENDIX - B
CURRICULUM VITAE
LIST OF FIGURES
Figure No.    Figure Name    Page
24 Lighting and image acquisition platform mounted onboard the 38
coffee harvester
25 (1) Precision Equation (2) Recall Equation (3) F1-Score Equation 39
(4) Average Precision Equation (5) Mean Average Precision
Equation
26 Two arbitrary video frames collected during the coffee harvest: 40
(a-c) original frames with fruits at different stages of maturation,
(b-d) detection performed by the proposed model
27 Research Stages 41
28 Arabica Fruit Images (a) unripe (b) semi ripe (c) Perfectly Ripe 42
(d) overripe
29 Research Procedure 46
30 System Architecture of Coffee Cherry Mobile Application 49
31 Swimlane Diagram of the Mobile Application 51
32 Model Architecture of Single-Stage Detector (YOLO) 53
33 Android Studio Logo 54
34 Anaconda Logo 55
35 Jupyter Notebook Logo 55
36 TensorFlow Logo 56
37 Experimental Setup 56
38 Implementation 57
39 Individual Placement and Detection of Robusta Coffee Cherries 58
Using the Proposed Mobile Application
40 True Positive 59
41 False Negative 59
42 False Positive 59
43 True Negative 59
44 Working Robusta Coffee Cherry Ripeness Classification Mobile 63
Application
45 Experimental Procedure and Data Gathering 64
46 Performance of Training Phase 65
47 Precision-Confidence Curve 66
48 Recall-Confidence Curve 67
49 Precision-Recall Area Under Curve (AUC) 68
50 Loss and mAP results after training the YOLOv5 model for 10 69
epochs
51 F1-Confidence Curve 72
52 Evaluation Metrics Box Plot for Each Class 77
53 Line Graph of Trials 1, 2 and 3 Percentage Scores 79
LIST OF TABLES
Table No.    Table Name    Page
CHAPTER 1
INTRODUCTION
Coffee is a staple beverage for the Filipinos. In 2018, Ernie Masceron, the
Senior Vice President and Head of Corporate Affairs of Nestlé Philippines, claimed
that Filipinos are coffee lovers. Over 21 million cups of coffee are consumed daily in
the Philippines (Masceron, 2018). Filipinos consume as much coffee as nations like
the United States, Japan, Brazil, and the European Union, making the country the 5th
biggest coffee consumer worldwide (Department of Trade and Industry, 2018). In fact,
coffee consumption in the Philippines increased by 6.82% from 2009 to 2019 (ICO,
2020).
Figure 1
The Increasing Coffee Consumption in the Philippines from 2014 to 2020
[Bar chart: coffee consumption in the Philippines in millions of 60-kg bags (y-axis, 2.8 to 3.25) versus coffee year (x-axis, 2014/15 to 2020/21)]
However, Filipino coffee farmers cannot keep up with the rising demand for
coffee products in the country. This leads to importing a steady volume of 100,000 to
135,000 metric tons, which costs the country about P7 billion (Philippine Coffee Board,
Inc., 2017). As shown in Figure 2, coffee production in the Philippines decreased
from an average of 127,412 metric tons in 1995 to only 60,044 metric tons of coffee
products in the year 2019.
Figure 2
Coffee Production in the Philippines from mid-1990s to 2019
[Line chart: coffee production in metric tons (y-axis, 0 to 120,000) versus year (x-axis, 1995 to 2019)]
Table 1
The Coffee Industry in the Philippines in 2020
PRODUCTION CONSUMPTION
The unfortunate reality of the Philippine coffee industry is concerning since the
country actually belongs to the Coffee Belt (Figure 3), an imaginary band around the
equator into which most, if not all, major coffee-producing countries such as Brazil,
Vietnam, and Indonesia fall (Department of Trade and Industry, 2018). Although the
Philippines' geographical location and features, climate, and soil conditions make it
ideal for planting all four varieties of commercially viable coffee: Arabica,
Liberica, Excelsa, and Robusta, Filipino farmers are still hesitant to invest in the coffee
industry because of its time-consuming and labor-intensive harvesting and processing
(CNN Philippines, 2018). According to an article from the Philippine Coffee Board,
Inc., picking only red, ripe coffee cherries is a critical factor in producing high-quality
coffee beans. Picking immature cherries results in a more acidic and bitter flavor profile,
while picking overripe coffee cherries can lead to off-flavors, mold, and other defects
in the coffee (Torch Coffee Company, 2016). Both instances can lead to poor coffee
quality and lower yield.
assistive technologies for the local coffee farmers. With this, the challenge now is to
make the Philippines a self-sufficient coffee-producing country. With the help of
technological advancements, research, and innovation, local coffee farmers can
hopefully be assisted in practicing optimal coffee farming and processing techniques,
making coffee farming more economical and sustainable.
Figure 3
The Coffee Belt Map (melacoffee.com)
1.2 Statement of the Problem
1. Local coffee farmers in Bukidnon still do not have access to any technology
that can classify Robusta coffee cherries according to their maturity levels
without the need for internet connectivity.
This study aims to develop a mobile application capable of object detection and
classification that can accurately classify coffee cherries according to their maturity
levels during the harvest season. Moreover, the researcher aims to:
1.4 Conceptual Framework
Figure 4
Conceptual Framework
This study focuses on the analysis of coffee cherry images and the identification
of their ripeness levels: unripe, semi-ripe, ripe, and overripe. It employs the Robusta
Coffee Cherry Ripeness Classification mobile application, installed on an Android
phone, which utilizes the mobile phone's camera to capture analog signals (detected
light) through the lens' photosensitive sensor and convert them into digital images.
Through the integration of the You Only Look Once (YOLO) version 5 object detection
and classification algorithm, the system is trained to identify Robusta coffee cherries
and classify them based on their maturity levels.
Moreover, the Robusta Coffee Cherry Ripeness Classification mobile
application detects and classifies Robusta coffee cherry maturity levels by applying
image processing principles to the input image, passing it through convolutional layers
that extract meaningful features using filters and signal processing techniques.
Accordingly, the process also involves predicting the probability that an object is
present in a cell and calculating the objectness score, predicting the coordinates of a
bounding box around an object depending on the objectness score, and lastly,
predicting the classification of the object within the bounding box using the YOLOv5
algorithm.
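The prediction steps described above can be sketched in code. The following is an illustrative NumPy decoding of a single YOLO-style prediction vector, using hypothetical raw values for the four ripeness classes; it is a simplified sketch (box parameters are passed through unchanged, and no anchors or grid offsets are applied), not the actual YOLOv5 implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_prediction(raw, conf_threshold=0.5):
    """Decode one YOLO-style prediction vector laid out as
    [tx, ty, tw, th, objectness, class scores...].
    Returns (box, class_id, confidence), or None if below threshold."""
    box = raw[:4]                       # bounding-box parameters
    objectness = sigmoid(raw[4])        # probability an object is present in the cell
    class_probs = sigmoid(raw[5:])      # per-class probabilities
    class_id = int(np.argmax(class_probs))
    confidence = objectness * class_probs[class_id]
    if confidence < conf_threshold:
        return None
    return box, class_id, float(confidence)

# Hypothetical raw prediction for the four ripeness classes:
# 0 = unripe, 1 = semi-ripe, 2 = ripe, 3 = overripe
raw = np.array([0.5, 0.5, 0.2, 0.2, 4.0, -3.0, -2.0, 5.0, -4.0])
result = decode_prediction(raw)
```

With these values, the high objectness and the dominant score at index 2 yield a confident "ripe" detection.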
The goal of this study is to address the challenges and concerns that are
prevalent in the field of agriculture, specifically the coffee farming industry. Firstly,
the Philippines has significant potential to become a major player in world coffee
production and export. As previously stated, the Philippines' geographical location,
climate, soil conditions, and geographical features make it ideal for coffee
farming. Second, the country is considered the 5th biggest coffee consumer
worldwide, and yet local coffee farmers cannot keep up with the rising demand for
coffee products. Lastly, the Philippines was once among the top coffee exporters
worldwide before the coffee rust disease infected the majority of the coffee farms in
the country. These only serve to highlight how promising the Philippine coffee
industry may be when given the necessary support and attention. As such, the
significance of the study is premised on the following:
Department of Agriculture. The findings of this study can benefit the Department of
Agriculture, specifically the coffee farming sector, by providing technological
advancement and aid to the industry.
Filipino farmers. The results of this study will aid Filipino coffee farmers in
systematizing the coffee cherry classification and sorting process. Furthermore, this
solution serves as an assistive technology for new coffee farmers without experience
and in-depth knowledge and skills in classifying coffee cherries based on their ripeness
level.
ECE Profession. This study can help ECE professionals in developing prototypes that
will automatically sort and harvest coffee cherries. Electronics Engineering students, as
well as professionals, can utilize the model for their systems and prototypes.
Future researchers. The findings and recommendations of the study will serve as a
reference for researchers who will pursue this kind of research.
7. The ripeness classification of coffee cherries is based on the samples of
unripe, semi-ripe, ripe and overripe coffee cherries from the local farm in
Pigtauranan, Pangantucan, Bukidnon.
8. The dataset totals 2,000 images: 1,200 images for training, 400 images for
validation, and 400 images for testing.
9. This study covers only one major development phase – the software
development which involves the design and implementation of the Android
application.
10. The study has only two experimental test setups: (1) the individual
placement of Robusta coffee cherries within the camera frame to determine
the number of TP, TN, FP, and FN, and (2) the placement of several coffee
cherries with mixed ripeness levels in one frame to determine the changes
in the application's performance in detecting coffee cherries over three
different trials.
11. The training phase is carried out using an NVIDIA GeForce MX330
GPU with 2 GB of dedicated memory.
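The 60/20/20 split described in item 8 can be sketched as follows. The filenames and the helper function below are hypothetical, shown only to illustrate how a 2,000-image dataset divides into 1,200/400/400 subsets.

```python
import random

def split_dataset(items, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle a list of image paths deterministically and split it
    into training, validation, and testing subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)      # reproducible shuffle
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]          # remainder goes to testing
    return train, val, test

# Hypothetical 2,000-image coffee cherry dataset, as in this study
images = [f"cherry_{i:04d}.jpg" for i in range(2000)]
train, val, test = split_dataset(images)
```

Shuffling before splitting prevents images captured in sequence (e.g., from the same branch) from clustering in one subset.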
1.7 Definition of Terms
Batch Size. Batch size refers to the number of samples that are processed at once by
the model during one forward/backward pass of training. It is used to calculate the
gradients from which the model adjusts its weights throughout each iteration of training.
Coffee Beans. Coffee beans are the seeds of the coffee fruit, which are roasted,
ground, and brewed to make coffee. The beans are the seeds found inside the coffee
cherries, which are produced by the coffee plant.
Coffee Cherry. Coffee cherry is also known as the coffee fruit. It is a small, grape-size,
round stone fruit that grows in bunches on the coffee plant. Its color changes from green
to deep red as it ripens.
Confusion Matrix. A confusion matrix is a table that is used to evaluate the
performance of a classification model by comparing the predicted class labels with the
true class labels of a set of test data.
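The counts of a binary confusion matrix (TP, FP, FN, TN) yield the precision, recall, and F1-score used throughout this study. A minimal sketch, with illustrative counts rather than the study's actual results:

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute precision, recall, and F1-score from confusion-matrix counts.
    tn is not used by these three metrics but is part of the matrix."""
    precision = tp / (tp + fp)              # of all predicted positives, how many were right
    recall = tp / (tp + fn)                 # of all actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts for one ripeness class (not the study's results)
precision, recall, f1 = binary_metrics(tp=90, fp=10, fn=10, tn=90)
```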
Convolutional Neural Network. A CNN is a Deep Learning algorithm that can take
in an input image, assign importance (learnable weights and biases) to various
aspects/objects in the image, and be able to discern one from the other.
Feature Maps. Feature maps are generated by applying filters or feature detectors to
the input image or to the feature map output of prior layers. Feature map visualization
reveals the internal representations of a particular input for each convolutional layer
in the model.
Deep Learning. Deep learning, which is simply a neural network with three or more
layers, is a subset of machine learning. These neural networks attempt to mimic
how the human brain functions, although they fall far short of matching it, enabling
the model to "learn" from vast volumes of data.
Epochs. An epoch represents a full iteration over the entire dataset, in which the
model is trained on each sample once and the weights of the model are updated based
on the computed gradients.
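The relationship between batch size and epochs can be made concrete: the number of iterations per epoch is the dataset size divided by the batch size, rounded up. A small sketch using the study's 1,200 training images and a hypothetical batch size of 16:

```python
import math

def iterations_per_epoch(num_samples, batch_size):
    """One epoch is one full pass over the dataset; each iteration
    processes one batch, so the count is ceil(samples / batch size)."""
    return math.ceil(num_samples / batch_size)

# The study's 1,200 training images with a hypothetical batch size of 16
steps = iterations_per_epoch(1200, 16)
```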
Frame Size. It refers to the physical dimensions of the image or video frame captured
by the camera and is determined by the resolution of the camera sensor and the aspect
ratio of the image or video.
GPU. Also known as the Graphics Processing Unit, it can significantly speed up the
training process for deep learning models, allowing researchers and data scientists to
iterate more quickly and experiment with larger models and datasets.
Inference Time. It refers to the amount of time it takes for the object detection and
classification model to analyze an image or video frame and identify the objects present
in the scene.
Maturity Level. The maturity level, also known as the ripeness, is the state of coffee
cherry being fully grown and ready to be harvested. Ideally, coffee cherries are
harvested at the peak of its ripeness.
OpenCV. An open-source library for computer vision, machine learning and image
processing that can be used for real-time object detection applications.
Testing Set. It comprises 20% of the total dataset and is used to evaluate the final
performance of the model on unseen data.
Threads. Threads in an object detection mobile application can help improve the speed,
efficiency, and accuracy of the detection and tracking process.
Training Set. The training set is the portion of the data used to train the machine
learning model, and it comprises 60% of the total dataset.
Validation Set. It comprises 20% of the total dataset and is used to tune the
hyperparameters and prevent overfitting.
Weights. Weights are the numerical values assigned to the connections between nodes
in a neural network; they are adjusted during the training process to optimize the
network's performance on a given task.
YOLOv5. Also known as You Only Look Once version 5, is a state-of-the-art real-time
object detection model developed by Ultralytics. It is based on a deep neural network
architecture that uses a backbone of convolutional layers to extract features from the
input image, followed by several layers that predict the object class and location.
CHAPTER 2
REVIEW OF RELATED LITERATURE
According to the Philippine Coffee Board Inc. in 2020, the history of Philippine
coffee is as rich as its flavor. In Lipa, Batangas in 1740, a Spanish Franciscan monk
planted the first coffee tree. From there, the cultivation of coffee expanded to other
regions of Batangas, resulting in a long-term increase in the province's wealth. Lipa
was eventually hailed as the country's coffee capital.
producer, the southernmost region of the Philippines has emerged as one of the top
destinations for high-quality coffee. With 64% of the nation's total land area and 69%
of all fruit-bearing trees, the southern island of Mindanao is where most of the nation's
production is concentrated (PCBI, 2022).
Table 2
Four Commercially-Viable Coffee Varieties that Grow in the Philippines
4 VARIETIES OF COFFEE BEANS
Figure 5
Philippine Coffee Production Ratio in 2019
[Pie chart: Robusta 69%, Arabica 23%, Excelsa 7%, Liberica (Kapeng Barako) 1%]
Figure 5 shows the production ratio of coffee products in the Philippines in the
year 2019. According to the Department of Agriculture, Robusta accounted for 69% of
total production in 2019. Arabica contributed around 23% of production and is
mostly utilized for brewing and blending. The contributions of Excelsa and Liberica
(Kapeng Barako) to overall production were 7% and 1%, respectively.
There are several factors to be considered to produce good quality coffee.
Factors, including the cultivar, growing altitude, climate, soil chemistry and conditions
during harvest and processing, drying method, temperature, humidity, roasting
conditions, grind size, and brewing method, affect the quality of the coffee that
consumers drink (Mermelstein, 2012). Moreover, environmental factors such as soil,
altitude, wind, precipitation, and topography play a vital role in determining coffee
plant health. On the other hand, resilience, coffee fruit maturation, and caffeine
production combine to provide a distinctive flavor fusion (Nationwide Coffee, 2021).
Lastly, the best soil nutrition, shading, watering, and genetics, which account
for the agronomic factors of producing coffee, will not result in a high-quality cup of
brew without optimal harvesting, processing, storage, and brewing methods (Howard,
2011). Only freshly harvested and fully ripe berries should be utilized in any of the
three primary processing methods, according to Van Der Vossen in 2009. These
methods include dry, semi-dry, and washed processing. Utilizing unripe coffee beans
typically results in astringent, bitter, and "off"-tasting coffee. Additionally, delays in
depulping the harvested coffee cherries and prolonged fermentation often cause
onion-like flavors or unpleasant smells (Howard, 2011).
A coffee tree starts to produce fruit along its branches when it reaches maturity,
which can take anywhere between 4 and 7 years. The fruit, sometimes known as the
cherry, starts out green and turns red when it is time for harvesting (Coffee Masters,
2022). The coffee harvest season in the Philippines typically lasts from November to
March. The southern region of the country, Mindanao, finishes its harvest first, with
harvesting then moving up the archipelago, so that Cavite finishes last, well into
February or March (Philstar Global, 2012).
production process has seven basic sub-procedures, namely: Planting, Harvesting,
Cherry Processing, Coffee Milling, Roasting, Grinding, and lastly, Packaging, as shown
in Figure 6.
Figure 6
The Coffee Production Process
[Flow diagram: Planting → Harvesting → Cherry Processing → Milling → Roasting → Grinding → Packaging]
PLANTING
The coffee production process starts by planting the unprocessed coffee beans
in large shaded beds so they can germinate and grow into coffee plants. To ensure that
the soil remains moist until the roots become firmly established, planting is best
done during the rainy season.
HARVESTING
The second procedure is coffee cherry harvesting. Newly planted coffee bushes
typically take 3 to 4 years to begin bearing fruit, depending on the variety. Coffee
cherries can be hand-picked to ensure that only ripe cherries are collected, or farmers
can practice strip-picking, where the cherries are stripped off the branch, which is not
recommended. Hand-picking coffee cherries requires thorough inspection for maturity,
which is a difficult and labor-intensive operation that, naturally, involves paid labor.
Coffee cherries mature at various times, and it can take up to three pickings to clean out
an entire farm. Picking ripe coffee cherries is crucial in determining the quality of
the coffee. Immature cherry harvesting in coffee production is associated with caustic,
monotonous, or astringent sensory qualities (Perez, 2023). When cherries are picked
ripe, they make a cup of coffee with a clean, floral, tea-like clarity, similar
to jasmine or hibiscus, and a slight sweetness, similar to raw honey or sugarcane juice.
Furthermore, research has indicated that beans from green-cane maturity stage coffee
berries have lower dry matter and yield than completely ripe coffee cherry fruits (Dalvi,
2017; Yusibani, 2022).
CHERRY PROCESSING
In this procedure, farmers can use either of two methods: the Wet Method
or the Dry Method. According to Nicky Matti, co-chairman of the coffee advocacy
group, the wet process is highly encouraged for coffee farmers because it allows the
coffee to undergo fermentation, which ensures flavor enhancement.
MILLING
The fourth procedure is the coffee milling process where dried coffee beans are
hulled which involves removing the parchment or dried husk that envelopes the coffee
bean. Coffee grading is also under the coffee milling process.
ROASTING
The next procedure is the roasting process. Green coffee beans are unroasted
coffee beans that have all the flavors preserved in them. The goal of roasting is to turn
green coffee beans into the flavorful brown beans you can buy in your favorite stores.
GRINDING
Next is coffee grinding, whose primary goal is to produce the most
flavor in a cup of coffee. The manner of grinding affects how quickly the flavors of the
coffee can be released.
PACKAGING
Lastly, packaging for coffee is crucial since any exposure to air could cause the
coffee to lump. This is especially true for ground coffee, which quickly loses its flavor
if exposed to air.
These seven basic procedures for coffee production by Rudy Caretti imply that
cultivating and processing high-quality coffee beans is not an easy task. It requires a lot
of time, effort, expertise, and technique to ensure quality.
According to an article from Perfect Daily Grind Ltd. in 2016, creating an
excellent cup of coffee involves careful attention to every step of the process, and
enhancing each of those processes is necessary to make a finer cup. Additionally,
according to K.C. O'Keefe's Quality Formula (2007), 35% of the coffee's quality is
determined by the maturity of the cherries harvested. Therefore, the harvesting process
is considered the most important variable among the procedures in coffee production.
Figure 7
Determining the optimal ripeness for harvesting red coffee cherry by color
The article also mentioned that the most common way to judge ripeness is
through the color of the cherries, as shown in Figure 7. In fact, farmers from Cafe Pacas
conducted a study in which they compared the color of the 1000-day lots to their
cupping scores and found a strong correlation. Notice how the cupping scores and the
ripeness of the coffee cherries, judged only by color, correlate in Figure 8. The
experiment they conducted supports O'Keefe's Quality Formula.
Figure 8
The relationship between cupping scores and ripeness level based on color
According to Thabet, Mahmoudi, et al. in 2014, over the past ten years, image
processing technology has advanced significantly. A large research community focused
on recently developing contexts such as augmented reality, visual search, object
recognition, and other areas is interested in its implementation on low-power mobile
devices. However, some image processing methods cannot be used efficiently in real-
time mobile applications due to their high computational complexity, extensive
processing times, and large battery consumption. Because these devices often have
limitations on their power supply, battery life, energy consumption, processing
capability, and RAM size, implementing these algorithms on them remains challenging.
The study also discussed object recognition in mobile applications. Object
detection is a fundamental task of identifying the presence and location of multiple
classes of objects within an image (TensorFlow, 2022). In previous research, face
detection and object identification on mobile platforms have become increasingly
prominent. The majority of these systems make use of the OpenCV package (Thabet,
Mahmoudi, et al., 2014).
Lastly, the study concluded that since 2010, the capacity of mobile platforms
for image processing appears to have reached a tipping point, largely because a
number of devices could benefit from outsourcing the processing to GPUs.
Moreover, the authors mentioned that to achieve high performance, it is important to
carefully consider the following factors in order to make the most of the limited
computational resources on mobile processors: (1) explore algorithmic parallelism; (2)
study computational complexity; (3) divide tasks between the CPU and the GPUs; (4)
minimize the overhead of CPU-GPU memory transfers; (5) avoid needless memory
accesses; and (6) achieve further optimizations in terms of execution time and
energy.
The proposed system is designed and optimized by integrating the best of its
kind from cutting-edge machine learning platforms such as OpenCV, TensorFlow Lite,
and Qualcomm Snapdragon. These machine learning platforms are utilized to
overcome the issues relating to the high performance requirements of implementing
CNN models on Android phones (Martinez-Alpiste, et al., 2021). Also, these platforms
are designed to conduct the inference task on mobile devices, making them appropriate
for use in environments with poor or no connectivity (Xu, et al., 2019). The design of
their proposed system is shown in Figure 9.
Figure 9
Design of the proposed system. Recognition Thread includes pre-processing (OpenCV)
and recognition (Snapdragon). Tracking thread is led by TensorFlow platform
The study concluded that their object recognition system performed with
high accuracy and effectiveness. According to the experiment's findings, a 33.8 MB
model could be rendered at a pace of 17.7 frames per second while maintaining an
accuracy of 33.1 millimeters per second. Furthermore, the results of the study suggested
that the system could be further enhanced to achieve per-frame real-time object
recognition on mobile devices.
YOLO and Custom Vision to identify the birds. Accordingly, the main purpose of this
study is to identify birds through the YOLO and Custom Vision algorithms in a mobile
application.
For the development of the mobile application, the study used Android Studio.
The application serves as an operation interface for packaging the entire system
architecture. After the user uploads a photo through the mobile app and it is stored in
the database, the deep model identification system returns the identification result to
the application so the user can find out the precise bird name. The overview of their
system is shown in Figure 10.
Figure 10
YOLO and Mobile App Architecture
For the implementation and identification results, the study obtained 70%
accuracy from the YOLO algorithm and 68% accuracy for the Custom Vision
Algorithm. Figure 11 shows the study’s identification results displayed on the mobile
application.
Figure 11
Identification Results
2.5 Image Processing
2.5.1 Performance Study of YOLOv5 and Faster R-CNN for Autonomous
Navigation around Non-Cooperative Targets
This paper explores how the relative navigation task can be accomplished
by using cameras and machine learning algorithms. Using experimental data gathered
from simulations of formation flight conducted in the ORION Lab at Florida Institute
of Technology, two deep learning-based object recognition algorithms, Faster Region-
based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5),
are evaluated for performance. In order to compare the effectiveness of the object
detectors, the mean average precision metrics are included in the data analysis.
This study has four test cases, shown in Table 3, in order to determine whether
lighting conditions play a significant role in the performance of the algorithm in
detecting and classifying the target objects.
Table 3
Test Cases
Table 4
Summary of Experimental Results
The analysis of the test case findings reveals that the lighting environment has
a significant impact on the effectiveness of the detection process. When mAP@0.5 and
mAP@0.5:0.95 are compared, YOLOv5 outperforms Faster R-CNN in Case 1 with dim
lighting and a light source positioned at 90º. This discrepancy in performance indicates
that the training dataset has a gap with intense lighting, which may be improved by
implementing lighting augmentations or expanding the training dataset to collect more
source data with intense lighting. This will teach the algorithms to detect objects based
on context when intense lighting obscures them.
Moreover, time is a critical component for this research. A lower frame rate
means a lower inference rate. The RSO's position and orientation would not be updated
as quickly as needed by the satellite. Ineffective missions would result from the chaser
missing or, worse still, colliding with the target. Thus, YOLOv5 is preferred over Faster
R-CNN because of its speed.
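The notation mAP@0.5 means that a detection counts as correct when its Intersection over Union (IoU) with a ground-truth box is at least 0.5, while mAP@0.5:0.95 averages over IoU thresholds from 0.5 to 0.95. A minimal sketch of the underlying IoU computation, using hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union

# Two hypothetical boxes overlapping by half their width
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Here the overlap is 50 of a 150-unit union, i.e., 1/3, so this detection would not count as a true positive at the 0.5 threshold.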
2.5.2 Face Mask Recognition System with YOLOV5 Based on Image Recognition
This study proposes replacing manual inspection with a deep learning method,
employing YOLOV5, the most effective object detection algorithm available at the
time, to better apply it in the real world, specifically in the supervision of people
wearing masks in public spaces.
Figure 12 depicts the system presented in the study. To begin, the camera takes
pictures of everyone entering the mall, which are then transferred to the interface for
face mask identification. If the face recognized within two seconds is a mask-wearing
face, the mall gate opens to allow passage; otherwise, the system returns to face mask
recognition until it succeeds.
Figure 12
Working System
The system consists of four parts: facial mask image enhancement, facial mask
image segmentation, facial mask image recognition, and interface interaction. Facial
mask image enhancement is used to improve the resolution of the mask worn for easy
detection. Face mask image segmentation is used to extract mask information. The
facial mask recognition part classifies the extracted mask information. The final
interface output can make the gate open smoothly and help customers to enter. The
recognition model is shown in Figure 13.
Figure 13
Recognition Model
The datasets utilized in this paper are from the AIZOOTech team's
FaceMaskDetection (https://github.com/AIZOOTech/FaceMaskDetection). The
datasets include photos in which faces can be recognized and the presence of a mask
indicated, as well as 7,959 open-source facial mask annotations. From the dataset, the
authors of this study chose 92% for training and 8% for testing. In the classification,
"0" denotes "mask" while "1" denotes "no mask."
After testing, the study's experiment was observed to have a success rate of
roughly 97.9%. The study also utilized some other classic machine learning
models for comparison. The results are shown in Figure 14.
Figure 14
Accuracy Comparison
2.5.3 Real Time Object Detection and Tracking Using Deep Learning and
OpenCV
Over the past few years, deep learning has had a significant impact on how the world is adapting to artificial intelligence. Object detection algorithms such as Region-based Convolutional Neural Networks (R-CNN), Faster R-CNN, Single Shot Detector (SSD), and You Only Look Once (YOLO) are among the most popular. When speed is prioritized over accuracy, YOLO outperforms the others, while Faster R-CNN and SSD offer greater accuracy. To implement detection and tracking efficiently, the approach combines SSD with MobileNets. This method detects objects efficiently without sacrificing performance (Chandan, et al., 2018).
Figure 15
Basic Block Diagram of Detection and Tracking
The fundamental block diagram for detection and tracking is shown in Figure
15. In this paper, detection and tracking algorithms based on MobileNets and SSD are
implemented in a Python environment. Object detection includes identifying an object's
region of interest within a class of images. Different methods include frame
differencing, optical flow, and background subtraction. Moreover, the paper notes that a YOLO-based algorithm combined with a Gaussian Mixture Model (GMM), applying deep learning concepts, will give good accuracy for feature extraction and classification.
For the object detection methods in this paper, three techniques were implemented, namely frame differencing, optical flow, and background subtraction. A Python program based on the SSD algorithm was designed and implemented in OpenCV. A total of 21 object classes were trained in this model using OpenCV, run on Ubuntu. After successfully scanning, detecting, and tracking the video sequence provided by the camera, the results are as shown in the figures below.
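Of the three methods above, frame differencing is the simplest to illustrate: subtract consecutive grayscale frames and threshold the absolute difference. The NumPy sketch below uses synthetic frames; the threshold value is illustrative, not taken from the paper.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Return a binary motion mask: 1 where the pixel intensity changed
    by more than `threshold` between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Two tiny synthetic 4x4 grayscale frames in which one pixel "moves"
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[2, 2] = 200
mask = frame_difference(f0, f1)
print(int(mask.sum()))  # 1 changed pixel
```

Casting to a signed type before subtracting avoids unsigned-integer wraparound when the current frame is darker than the previous one.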
Figure 16
Detection of Bicycle with confidence level of 99.49%
Figure 17
Detection of Bus with confidence level of 98.68%
Figure 18
Detection of Train with confidence level of 99.99%
Figure 19
Detection of Dog with confidence level of 97.77%
The paper concluded that objects are detected using the SSD algorithm in real-time scenarios. Additionally, SSD has demonstrated results with a high level of confidence.
Because coffee quality remains comparatively poor, the state of the coffee industry can still be considered hampered. This is brought on by the conventional method of separating coffee fruit: traditional sorting of coffee fruits still relies on operator knowledge, so the sorting results depend greatly on the degree of operator knowledge (Sudana et al., 2020).
Figure 20
HSV color model
For the materials and methods of this study, they used the Hue, Saturation and Value (HSV) color model, which is a derivative of the Red, Green, Blue (RGB) color model. The HSV values can be calculated from RGB using the formula in Figure 21, and Figure 20 shows the visualization of the HSV color model used in this study.
Figure 21
Formula Calculation in Finding HSV Values
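The RGB-to-HSV conversion in Figure 21 can be reproduced with Python's standard colorsys module. This is a sketch, not the study's code; the wrapper function and the chosen output ranges (hue in degrees, saturation and value in percent) are illustrative.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB values to HSV, with hue in degrees and
    saturation/value in percent (colorsys works on floats in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

print(rgb_to_hsv(255, 0, 0))      # pure red -> (0.0, 100.0, 100.0)
print(rgb_to_hsv(128, 128, 128))  # gray -> hue 0, saturation 0
```

Separating chromatic content (hue) from intensity (value) is what makes HSV attractive for color-based ripeness segmentation under varying lighting.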
Moreover, the study also utilized the K-Nearest Neighbor (KNN) method of classification. K-Nearest Neighbor, reportedly first introduced by Fix and Hodges in 1951, can also be referred to as one of the non-metric methods used in categorization (Deole, et al., 2014; Maillo, et al., 2017). Finding the closest distance between the data to be evaluated and the number of neighbors (k) nearest in the training data is the basis of the KNN function (Astutik, 2015). It has two phases, namely the training phase and the classification phase.
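The two-phase KNN procedure described above (store the training data, then classify a sample by majority vote among its k nearest training samples) can be sketched as follows; the toy hue/saturation features and labels are illustrative only, not the study's data.

```python
import math
from collections import Counter

def knn_classify(train_data, query, k=3):
    """Classify `query` by majority vote among the k nearest training samples.
    `train_data` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train_data, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy hue/saturation feature vectors with ripeness labels
train = [((10, 80), "ripe"), ((12, 75), "ripe"),
         ((90, 60), "unripe"), ((95, 65), "unripe")]
print(knn_classify(train, (11, 78), k=3))  # ripe
```

An odd k (such as the k=3 the study found best) reduces the chance of tied votes in two-class neighborhoods.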
Figure 22
Flowchart of K-Nearest Neighbor
In this study, 450 images of coffee were used, 300 of which served as training
data and 150 as test data. The stages of the developed application can be seen in Figure
23.
Figure 23
General Outline of the System
The general description of this system has two main stages, namely the training stage and the identification stage. The first step is image acquisition of 300 coffee images for training data. Second is the resizing stage, where the orientation of the image is checked and changed to landscape, and the image is then resized to 667 x 500 pixels. Third is the conversion of the image from the RGB color space to the HSV color space. Fourth is the image processing stage, where the image quality is improved in order to recognize the target object in the image. Next is segmentation and feature extraction, which is carried out to get the feature values of the coffee image object on each segmented HSV channel. Lastly, the classification stage is done to identify the type of coffee maturity based on the feature values of the coffee object that has been processed against the training data in the database.
Tests are conducted on the classification approach by comparing it with a range
of neighbors (k). The k values used in the test are k=1, k=3, k=5, k=7, and k=9. Table
5 shows the comparison of KNN classification test results.
Table 5
Comparison of KNN Classification Test Result
The study concluded that the test results demonstrate that the number of
neighbors (k) of three yields the best performance in terms of percentage. The accuracy
in percentage obtained at k=3 is 95.56%.
2.6.2 Detection, classification, and mapping of coffee fruits during harvest with
computer vision
In this study, a computer vision model and algorithm are used to recognize and
categorize coffee fruits and map their maturity stages during harvest. The photos of the
coffee fruits used in this study for the image acquisition phase are frames taken from
videos recorded during the coffee harvest season from May 31 to June 06, 2020.
The Darknet, an open-source neural network framework written in C language,
was utilized to create the model used to identify and categorize coffee fruits. An object
detection system named You Only Look Once (YOLO) was used for the detection and
classification in this study. Because it processes images and predicts object bounding boxes and class probabilities all in one step, this particular object detection system is well known for its high processing speed (Redmon, et al., 2016). Figure 24 shows the image acquisition prototype mounted onboard the coffee harvester.
Figure 24
Lighting and image acquisition platform mounted onboard the coffee harvester
The study adopted a transfer learning technique to avoid the need for a large number of training images to train the object detection and classification model. The
YOLOv3-tiny model's parameters (or weights), which were pre-trained using the
COCO data set, were used to fine-tune the model. The model is better able to extract
various types of features when weights that have already been trained on a more robust
database are used. Despite the lack of classes for the stages of coffee fruit maturity in
the COCO data set, the pre-trained model's capacity to extract specific features can be
applied to the new model. Consequently, additional training was done on the pre-trained
network using photos of the research object (coffee fruits).
The labelling of the training images was performed using the Yolo mark tool, a graphical user interface created for marking the bounding boxes of objects of interest. The fruits were labelled according to the authors' visual classification system based on color.
Figure 25
(1) Precision Equation (2) Recall Equation (3) F1-Score Equation (4) Average
Precision Equation (5) Mean Average Precision Equation
Here TP denotes the True Positives, FP the False Positives, and FN the False Negatives. In addition, the study calculated the average precision per class (AP) and the mean average precision (mAP) at an intersection over union (IoU) threshold of 50%.
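Equations (4) and (5) can be sketched numerically: AP approximates the area under the precision-recall curve, and mAP averages AP over the classes. The rectangle-rule integration and the curve points below are a simplified illustration, not the study's exact interpolation scheme.

```python
def average_precision(recalls, precisions):
    """Approximate AP (equation 4) as the area under the precision-recall
    curve, integrating precision over recall with the rectangle rule."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += p * (r - prev_r)  # rectangle of height p over the recall step
        prev_r = r
    return ap

def mean_average_precision(ap_per_class):
    """mAP (equation 5) is the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)

# Illustrative points on a precision-recall curve
ap = average_precision([0.2, 0.4, 0.6, 0.8, 1.0], [1.0, 0.9, 0.85, 0.7, 0.5])
print(round(ap, 3))  # 0.79

# Per-class APs reported by the study (unripe, ripe, overripe)
print(round(mean_average_precision([0.86, 0.85, 0.80]), 3))  # 0.837
```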
The experimental results of the study show that the YOLOv3-tiny-800 model obtained a mAP of 84%, an F1-Score of 82%, a precision of 83%, and a recall of 82% on the validation set. In addition, the model presented APs of approximately 86%, 85%, and 80% in the classification of the unripe, ripe, and overripe classes of the validation set, respectively. The researchers believe that there was some confusion between ripe and overripe fruits at the time of classification because of the closeness of their colors. In other words, the model occasionally produced a false positive by incorrectly classifying the overripe fruits.
Figure 26
Two arbitrary video frames collected during the coffee harvest: (a-c) original frames
with fruits at different stages of maturation, (b-d) detection performed by the proposed
model
Finally, the study proved that the structure of the object detection system, based on the architecture of the YOLOv3-tiny neural network, is robust and computationally efficient. The classifications of unripe, ripe, and overripe coffee fruits had average precisions of 86.0%, 85.2%, and 80.0%, respectively. The "tiny" version of the YOLOv3 model has a minimal computing requirement, making it an excellent choice for adaptation and embedding to provide real-time responses during the harvest, as has also been demonstrated in other studies.
2.6.3 Classification model of ‘toraja’ arabica coffee fruit ripeness levels using
convolution neural network approach
Figure 27
Research Stages
This study started from the collection of data in the form of raw images of Arabica Toraja coffee cherries. The image acquisition was conducted using a smartphone camera with a resolution of 5 MP. Figure 28 shows the coffee fruit samples based on their ripeness categories.
Figure 28
Arabica Fruit Images (a) unripe (b) semi ripe (c) Perfectly Ripe (d) overripe
Furthermore, a total of 4000 images were collected for four categories: unripe,
semi ripe, ripe and overripe. Table 6 shows the datasets sharing of the study.
Table 6
Datasets Sharing
The study developed three different CNN architecture models, which were then implemented in Python in the Google Colaboratory application using the hardware library and TensorFlow. From the simulations of the three CNN architecture models, it is concluded that the accuracy of Model1 was 98.25%, Model2 was 97.75%, and Model3 was 98.75%.
2.7 Conclusion
Table 7
Summary of Related Studies
CHAPTER 3
METHODOLOGY
This chapter deals with the prominent attributes of the research design, such as the research methodology, research instruments, and research procedures. It gives an outline of the research methods that were followed in the study. The researcher describes the research design chosen for the purpose of this study and the reasons for this choice. The instrument used for data collection is also described, and the procedures followed to carry out this study are included.
This study covers only one major development phase: software development, which involves the design and implementation of an Android mobile application to accurately classify coffee cherries based on their maturity level using deep learning and an object detection algorithm.
3.3 Research Setting
The study was conducted in a quiet place conducive to programming and developing the mobile application, with available internet connectivity, such as the electronics laboratory at the University of Science and Technology of Southern Philippines. Additionally, the collection of training, validation, and testing data (coffee cherry images) was conducted at a local Robusta coffee farm located at Pigtauranan, Pangantucan, Bukidnon. This research locale was specifically chosen since Robusta coffee trees are available in the area and the farmers were willing to accommodate the study and provide sorted Robusta coffee cherry samples.
Figure 29
Research Procedure
3.4.1 Define the Problem, Specific Aims and Objectives
The researcher determined during the review of literature that existing similar studies had employed non-real-time object detection and image processing algorithms. Additionally, a similar study used You Only Look Once version 3 (YOLOv3), an older version of the YOLO object detection algorithm. In light of the aforementioned findings, this study utilizes You Only Look Once version 5 (YOLOv5), which provides faster processing and improved accuracy for real-time object recognition in smartphone applications and embedded systems.
To help readers better understand the overall structure and timeline of the project, and to help the researcher monitor and manage her progress, a Gantt chart was created, as depicted in Table 8. The Gantt chart illustrates the timeline of the research project, including milestones, deadlines, and the expected duration of each task.
Table 8
Gantt Chart
Figure 30 shows the System Architecture of the Robusta Coffee Cherry Ripeness Classification mobile application. The mobile app is designed to be user-friendly and straightforward.
Figure 30
System Architecture of Coffee Cherry Mobile Application
The study aims to develop a coffee cherry identification mobile application that
can accurately classify unripe, semi-ripe, ripe and overripe coffee cherries. The
swimlane diagram in Figure 31 serves as a visual tool that illustrates the flow of activities and interactions between different entities in the training and testing phases of the YOLOv5 object detection model. It shows the different stages of the training process, such as data preparation, model configuration, model training, model evaluation, model tuning, and model testing. This diagram helps visualize the dependencies and relationships between these stages and ensures a clear understanding of the overall training and testing procedures.
involves selecting and configuring the convolutional layers and defining the network architecture. The deep learning architecture that the YOLOv5 algorithm employs is the Convolutional Neural Network (CNN), which is specifically designed for image processing tasks and utilizes signal processing to extract hierarchical features from the input images.
Moreover, the testing phase begins with opening the working mobile application and feeding it input frames, captured by the camera sensor converting analog light signals into digital images. These images are then pre-processed, which includes operations such as image enhancement, noise reduction, or contrast adjustment. The backbone network in YOLOv5, typically a deep convolutional neural network (CNN), is responsible for extracting features that represent various visual patterns and objects from the pre-processed frame. The neck network, an intermediate component in YOLOv5, further refines the features extracted by the backbone network. Feature fusion and feature aggregation are employed to combine information from multiple levels of the backbone network, enabling the model to capture both low-level and high-level features with enhanced contextual information. The neck network facilitates better feature representation and improves the detection performance of the model. Lastly, objectness prediction, bounding box prediction, and non-maximum suppression all fall under the detection head of the YOLOv5 architecture.
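Of the detection-head steps just mentioned, non-maximum suppression can be illustrated in a few lines: rank boxes by confidence and discard any box that overlaps an already-kept box beyond an IoU threshold. This is a plain-Python sketch with toy boxes; YOLOv5's actual implementation is vectorized in PyTorch.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of the same cherry plus one distinct detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

The duplicate box (index 1) is suppressed because its IoU with the kept box exceeds the threshold.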
Figure 31
Swimlane Diagram of the Mobile Application
A. IMAGE ACQUISITION
This proposed study utilized 1,200 training data images and 400 validation data images of coffee cherries. Table 9 shows the dataset sharing of the study. On the other hand, during the testing phase, the researcher utilized the camera mobile application, developed via Android Studio and TensorFlow, to collect test data frames.
Table 9
Dataset Sharing
CLASSIFICATION    NO. OF TRAINING DATA    NO. OF VALIDATION DATA    NO. OF TESTING DATA
Unripe            300                     100                       100
Semi-ripe         300                     100                       100
Ripe              300                     100                       100
Overripe          300                     100                       100
During the training phase, the researcher used the makesense.ai website to annotate the training images used to train and teach the model the correct classification of coffee cherries according to their maturity level. The training datasets are based on samples of Robusta coffee cherries categorized and sorted according to their maturity level (unripe, semi-ripe, ripe, overripe) by a local farm at Pigtauranan, Pangantucan, Bukidnon.
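makesense.ai can export annotations in the YOLO text format that YOLOv5 consumes: one .txt file per image, with one line per object giving the class index followed by the bounding-box center coordinates, width, and height, all normalized to [0, 1]. A hypothetical label file for an image containing one unripe and one ripe cherry might look like this (the class-index mapping and the numbers are illustrative, not taken from the study's dataset):

```text
0 0.112500 0.248000 0.131250 0.172000
2 0.481250 0.532000 0.145833 0.196000
```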
because it has competitive speed and accuracy compared to other algorithms.
Moreover, this architecture is straightforward and easy to implement.
Figure 32
Model Architecture of Single-Stage Detector (YOLO)
Neck network. The neck network is responsible for combining features from
different scales to create a single feature map that is used for object detection. This
makes it easier for the model to generalize to objects of various sizes and scales.
that are used to predict the location and size of objects in the image. The anchor boxes
are defined at different scales and aspect ratios to improve detection accuracy.
Once the problem was identified during the early stages of this study, the researcher was able to construct and propose a framework. The software and libraries that the researcher required to develop the software are described below.
Figure 33
Android Studio Logo
3.4.4.2 Anaconda
Anaconda is a popular open-source platform for data science and scientific
computing. It is designed to simplify the process of installing and managing
packages and dependencies required for data analysis and machine learning.
Anaconda includes Jupyter Notebook, a web-based interactive computing
environment that allows users to create and share documents that contain live
code, equations, visualizations, and narrative text.
Figure 34
Anaconda Logo
Figure 35
Jupyter Notebook Logo
Figure 36
TensorFlow Logo
3.4.5 Testing
The model is tested and evaluated in a functional Robusta coffee cherry ripeness classification mobile application to determine possible adjustments for the system. The researcher then evaluates whether the proposed mobile application successfully detects and classifies coffee cherries based on their maturity levels. Figure 37 shows the experimental setup of the study.
Figure 37
Experimental Setup
The implementation and testing of the coffee cherry mobile application were conducted in an indoor setup as shown in Figure 38. The phone is placed 12-15 inches above the surface. The surface can be a "nigo", which is mostly used by local coffee farmers for sorting coffee cherries. The first test is done through individual placement of coffee cherries into the "nigo" within the camera frame for detection. This is to determine the True Positives, False Positives, False Negatives, and True Negatives. With these gathered data, the researcher is able to compute the Precision, Recall, AP, Specificity, F1-Score, False Detection, Missed Detection, mAP, and Accuracy of the mobile application.
Figure 38
Implementation
The second test is done through placing several coffee cherries with different
maturity levels into the “nigo” and capturing it in one frame. This is to test whether the
Robusta Coffee Cherry Ripeness Classification mobile application is still performing
accurately and consistently in three different trials: (1) 20 test objects, (2) 50 test objects,
and (3) 100 test objects in one frame.
Data is collected during the testing phase of the proposed Robusta Coffee
Cherry Ripeness Classification mobile application. The first test is done through
individual placement of Robusta coffee cherries into the “nigo” to determine the
number of True Positives, True Negatives, False Negatives and False Positives. Figure
39 shows a sample testing and implementation of the first test.
Figure 39
Individual Placement and Detection of Robusta Coffee Cherries Using the Proposed
Mobile Application
The data gathered during this test is analyzed and evaluated using a confusion matrix. A confusion matrix is an N x N matrix used to assess the effectiveness of a classification model, where N is the total number of target classes. In the matrix, the actual target values are compared with those that the machine learning model predicted. This is demonstrated in Table 10.
Table 10
Confusion Matrix Sample Table
CONFUSION MATRIX
                                        PREDICTED CONDITION
TOTAL TEST OBJECTS                      POSITIVE            NEGATIVE
TRUE CONDITION        POSITIVE          True Positive       False Negative
                      NEGATIVE          False Positive      True Negative
True Positive (TP). The actual condition matches the predicted condition.
False Positive (FP). The actual condition was falsely predicted. This happens when
the mobile application determines something as positive when it is actually negative.
False Negative (FN). The actual condition is positive yet the mobile application
predicted it as negative. It fails to identify the presence of the object.
True Negative (TN). Both actual and predicted conditions are negative.
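The four outcomes defined above can be counted programmatically in a one-vs-rest fashion, treating one maturity class as the "positive" class. The sketch below uses toy labels for illustration; it is not the mobile application's code.

```python
from collections import Counter

def confusion_counts(y_true, y_pred, positive):
    """Count TP, FP, FN, TN for one class treated as 'positive' (one-vs-rest)."""
    counts = Counter()
    for actual, predicted in zip(y_true, y_pred):
        if actual == positive and predicted == positive:
            counts["TP"] += 1
        elif actual != positive and predicted == positive:
            counts["FP"] += 1
        elif actual == positive and predicted != positive:
            counts["FN"] += 1
        else:
            counts["TN"] += 1
    return counts

# Toy ground truth vs. predictions, scoring the "ripe" class
y_true = ["ripe", "ripe", "unripe", "semi-ripe", "ripe"]
y_pred = ["ripe", "unripe", "ripe", "semi-ripe", "ripe"]
counts = confusion_counts(y_true, y_pred, "ripe")
print(dict(counts))  # {'TP': 2, 'FN': 1, 'FP': 1, 'TN': 1}
```

Running this once per maturity class yields the per-class counts needed for the figures of merit in Table 11.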
To further elaborate these conditions, the figures below present the concept of
a “true positive”, “true negative”, “false negative” and “false positive” in the context of
a ripe coffee cherry.
Figure 40
True Positive
Figure 41
False Negative
Figure 42
False Positive
Figure 43
True Negative
Figure 40 shows that the model accurately detected the ripe coffee cherry, meaning it is a "True Positive". On the other hand, Figure 41 shows that the model did not detect any ripe coffee cherry in the image even though a ripe coffee cherry is actually present, therefore it is a "False Negative". Figure 42 shows that the model inaccurately detected and classified a semi-ripe coffee cherry as a ripe coffee cherry, therefore it is a "False Positive". Lastly, Figure 43 consists of only one unripe coffee cherry, and the model did not classify it as ripe, therefore it is a "True Negative".
Table 11
Equations of Figure of Merits Used in Evaluating the Proposed Mobile Application
Precision                       TP / (TP + FP)
Recall/Sensitivity              TP / (TP + FN)
F1-Score                        2 × (precision × recall) / (precision + recall)
Specificity                     TN / (TN + FP)
Missed Detection                FN / (TP + FN)
False Detection                 FP / (TP + FP)
Accuracy                        (TP + TN) / (TP + TN + FP + FN)
Average Precision (AP)          ∫₀¹ P(r) dr
Mean Average Precision (mAP)    (1/N) Σᵢ APᵢ
Precision. Precision is the ratio of true positives to the total number of predicted
positives (true positives + false positives). It measures the proportion of predicted
positive samples that are actually positive. High precision indicates that the model is
making fewer false positive predictions.
Recall/Sensitivity. Recall is the ratio of true positives to the total number of
actual positives (true positives + false negatives). It measures the proportion of actual
positive samples that are correctly predicted by the model. High recall indicates that the
model is making fewer false negative predictions.
F1-Score. F1-Score is the harmonic mean of precision and recall. It provides a
balance between precision and recall and is a useful metric when both false positives
and false negatives are important. The F1-Score ranges from 0 (worst) to 1 (best).
Specificity. Specificity is the ratio of true negatives to the total number of actual
negatives (true negatives + false positives). It measures the proportion of actual
negative samples that are correctly predicted by the model.
Missed Detection. Missed detection is the proportion of actual positive samples that the model incorrectly predicts as negative; it corresponds to the false negatives.
False Detection. False detection is the proportion of positive predictions that are actually false positives.
Accuracy. Overall accuracy is a figure of merit that is commonly used in
classification tasks to evaluate the performance of a model. It is defined as the
percentage of correctly classified samples among all samples in the dataset.
Average Precision (AP). AP is a measure of the precision of the model at
different levels of recall. To calculate AP, the model's precision is plotted against recall,
and the area under the precision-recall curve (PR curve) is calculated. A higher AP
score indicates that the model has better precision-recall trade-offs.
Mean Average Precision (mAP). mAP is the mean of AP scores across
multiple classes or categories. In object detection tasks, there are usually multiple
classes that the model needs to detect. For each class, AP is calculated based on the
model's predictions for that class, and then the AP scores are averaged across all the
classes. mAP provides an overall evaluation of the model's performance across all the
classes.
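All of the count-based metrics defined above can be computed directly from the four confusion-matrix values. The sketch below is illustrative code, not the application's implementation; the worked example uses the Unripe counts reported later in Table 15 (TP=94, FP=0, FN=6, TN=300).

```python
def figures_of_merit(tp, tn, fp, fn):
    """Compute the count-based evaluation metrics in Table 11."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "missed_detection": fn / (tp + fn),
        "false_detection": fp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Worked example: the Unripe class counts from Table 15
m = figures_of_merit(tp=94, tn=300, fp=0, fn=6)
print(round(m["precision"], 3), round(m["recall"], 3), round(m["accuracy"], 3))
# 1.0 0.94 0.985
```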
Additionally, the data gathered during the second test is tabulated and analyzed as shown in Table 12.
Table 12
Sample Table for Data Gathering in Several Coffee Cherries in a Frame
TRIAL 1 (TOTAL TEST OBJECTS: 20)
No.    CLASSIFICATION    TOTAL    IDENTIFICATION    %
1      Unripe            5        -                 -
2      Semi-ripe         5        -                 -
3      Ripe              5        -                 -
4      Overripe          5        -                 -

TRIAL 2 (TOTAL TEST OBJECTS: 50)
No.    CLASSIFICATION    TOTAL    IDENTIFICATION    %
1      Unripe            12       -                 -
2      Semi-ripe         13       -                 -
3      Ripe              13       -                 -
4      Overripe          12       -                 -

TRIAL 3 (TOTAL TEST OBJECTS: 100)
No.    CLASSIFICATION    TOTAL    IDENTIFICATION    %
1      Unripe            25       -                 -
2      Semi-ripe         25       -                 -
3      Ripe              25       -                 -
4      Overripe          25       -                 -
With this, the researcher is able to analyze whether the percentage of correct detection and classification of Robusta coffee cherries increases, decreases, or remains consistent throughout the three trials as the number of test objects in one frame increases.
CHAPTER 4
RESULTS AND DISCUSSIONS
This chapter discusses the data analysis and findings that are accumulated from
the experiment that the researcher conducted. In Chapter 3, the methodology used in
the study was described in detail, including the sampling procedure, data collection
methods, and statistical analyses. The research problem that guided the study is:
1. Local coffee farmers in Bukidnon still do not have access to any technology that can classify Robusta coffee cherries according to their maturity levels without the need for internet connectivity.
Figure 44
Working Robusta Coffee Cherry Ripeness Classification Mobile Application
In order to acquire the necessary data for the performance evaluation of the
proposed mobile application, the researcher conducted an experiment. The procedure
begins with preparing the test samples of Robusta coffee cherries in four different maturity levels and setting up the phone stand so that the phone is 12-15 inches above the "nigo". Next, the researcher opens the proposed mobile application installed on an Android phone and starts placing coffee cherries in the "nigo" one at a time. Figure 45 shows the experimental procedure.
Figure 45
Experimental Procedure and Data Gathering
essential to determine whether the proposed mobile application’s performance
increases or decreases as the number of coffee cherries present in one frame increases.
Figure 46
Performance of Training Phase
The diagonal cells of the matrix present the number of correct classifications of the model. Figure 46 shows that all four classes of maturity levels (unripe, semi-ripe, ripe, overripe) have high numbers of True Positives. "Unripe" has 93% correct classifications, "Semi-Ripe" has 96%, "Ripe" has 97%, and "Overripe" has 80%. Overall, it is evident that the algorithm performs relatively well in predicting all four classes.
Figure 47
Precision-Confidence Curve
Precision and recall are two metrics that are commonly used to evaluate the
performance of a classification model. Both metrics are significant indicators of how
well the model can identify positive instances. Precision measures the proportion of
true positive predictions among all positive predictions. It answers the question, "Of all
the instances that the model predicted as positive, how many were actually positive?"
A high precision value indicates that the model is good at identifying positive instances,
but it may miss some positive instances in the process. Figure 47 illustrates the
relationship between precision and the confidence threshold. The graph demonstrates
that the precision of the model increases with the confidence threshold.
Figure 48
Recall-Confidence Curve
Recall measures the proportion of true positive predictions among all actual
positive instances. It answers the question, "Of all the instances that were actually
positive, how many did the model correctly identify as positive?" A high recall value
indicates that the algorithm is good at identifying all positive instances, but it may also
produce many false positive predictions. Figure 48 shows the relationship between
recall and the confidence threshold. The graph demonstrates that as the confidence
threshold increases, the model’s recall decreases.
Figure 47 and Figure 48 prove that there is a trade-off between precision and recall. A balance between precision and recall should be sought to find an optimal value for each metric, which indicates that the model has fewer false positives and fewer false negatives.
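The precision-recall trade-off across confidence thresholds can be reproduced by sweeping the threshold over a set of scored detections. The toy scores and labels below are illustrative only; they are not the application's data.

```python
def precision_recall_at_threshold(scores, labels, threshold):
    """Treat detections with confidence >= threshold as positive predictions
    and compute precision and recall against binary ground-truth labels."""
    tp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l == 1)
    fp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l == 0)
    fn = sum(1 for s, l in zip(scores, labels) if s < threshold and l == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy confidence scores with ground-truth labels (1 = actually positive)
scores = [0.95, 0.90, 0.70, 0.60, 0.40, 0.30]
labels = [1, 1, 0, 1, 0, 1]
for t in (0.2, 0.5, 0.8):
    p, r = precision_recall_at_threshold(scores, labels, t)
    print(t, round(p, 2), round(r, 2))  # precision rises, recall falls
```

Raising the threshold discards low-confidence detections, which removes false positives (raising precision) but also drops true positives (lowering recall), exactly the behavior seen in Figures 47 and 48.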
Figure 49
Precision-Recall Area Under Curve (AUC)
The tradeoff between precision and recall for various thresholds is depicted by
the precision-recall curve as shown in Figure 49. High precision is correlated with a
low false positive rate, while high recall is correlated with a low false negative rate. Figure 49 shows the AUC for each classification. The thin blue line, which represents the "Unripe" class, has an AUC value of 0.980. The orange line, on the other
hand, represents the “Semi-Ripe” class and has an AUC value of 0.971. The “Ripe”
class is represented by the green line with an AUC value of 0.961. Lastly, the “Overripe”
class with red line has the lowest AUC value of 0.915 among the other classes. In
addition, the thick blue line represents the overall AUC value of 0.957 for all four
classes. The precision-recall AUC values can help compare the overall performance of
the model for different classes and identify which classes may require further
improvement in the model.
As shown in Figure 50, the researcher trained the YOLOv5 algorithm on the training image dataset for 10 epochs. During training, the researcher tracked several metrics to evaluate the performance of the model. Furthermore, the model was evaluated on a separate validation set of images, and the same metrics were tracked during validation. These metrics include the following:
Figure 50
Loss and mAP results after training the YOLOv5 model for 10 epochs
TRAINING RESULTS
YOLOv5 loss function is composed of three parts: (1) box_loss, (2) obj_loss,
and (3) cls_loss. These three loss functions are combined to get the total loss value of
the model. Firstly, the trajectory of the loss function used to calculate the localization
error of predicted bounding boxes is shown in the train/box_loss graph. The
train/box_loss value of 0.024727 in the last iteration indicates that the model is able to
accurately predict the locations of objects in the input images with relatively low error.
Secondly, the train/cls_loss graph shows how the loss function, which calculates the
classification error of the predicted object classes, has changed over time. The graph
shows that the model's train/cls_loss value of 0.0050826 indicates that it can accurately
classify the objects it detects. Lastly, the train/obj_loss graph displays the evolution of the objectness loss, which measures the confidence of the predicted bounding boxes. The model can confidently identify objects in the input images, as illustrated in the graph with a train/obj_loss value of 0.011186 in the last iteration.
VALIDATION RESULTS
Overall, these results suggest that the YOLOv5 model is performing reasonably
well on both the training and validation datasets, with low losses and high precision
values. However, as with the training results, these results should be interpreted with
caution because the model's performance may differ depending on the specific
characteristics of the dataset and task being evaluated. Additional evaluation and testing
may be necessary to fully assess the performance of the model.
Figure 51
F1-Confidence Curve
The F1 curve plots the F1 score against different classification thresholds. The
x-axis represents the classification threshold, which is a value between 0 and 1 that
determines how confident the model needs to be before predicting a positive class label.
The y-axis represents the F1 score, which is a harmonic mean of precision and recall.
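The construction of such a curve can be sketched as follows: sweep the confidence threshold, compute precision and recall among the detections that survive each threshold, and take their harmonic mean. The detection pairs below are made-up examples, not the study's actual detections:

```python
# Sketch of how an F1-confidence curve is produced. Each detection is a
# (confidence, is_true_positive) pair; num_ground_truth is the number of
# actual objects. The sample data is invented for illustration.
def f1_at_threshold(detections, num_ground_truth, threshold):
    kept = [d for d in detections if d[0] >= threshold]
    tp = sum(1 for d in kept if d[1])        # correct detections kept
    fp = len(kept) - tp                      # wrong detections kept
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / num_ground_truth
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

detections = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.3, False)]
thresholds = [t / 100 for t in range(0, 100, 5)]
scores = [f1_at_threshold(detections, num_ground_truth=4, threshold=t)
          for t in thresholds]
best = thresholds[scores.index(max(scores))]  # threshold where F1 peaks
```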
Table 13
Labels Vs. Predictions of Validation Dataset
[Two-column table of annotated validation images: ground-truth labels (left) and model predictions with confidence scores (right).]
Table 13 shows some examples from the validation dataset and compares the ground truth (true labels) of the Robusta coffee cherries with the maturity levels predicted by the YOLOv5 model. The first column shows the true maturity levels of the validation images, based on the samples provided by a local farm in Pigtauranan, Pangantucan, Bukidnon, while the second column shows the maturity levels that the YOLOv5 model predicted, together with the confidence level of each prediction.
Table 14
Confusion Matrix
[Confusion matrix: true labels (rows) vs. predicted labels (columns).]
The confusion matrix in Table 14 shows the performance of the Robusta Coffee Cherry Ripeness Classification mobile application in classifying the four maturity levels of Robusta coffee cherries: Unripe, Semi-ripe, Ripe, and Overripe. The rows represent the true labels, while the columns represent the predicted labels.
1. The proposed mobile app correctly classified 94 Unripe, 90 Semi-ripe, 92 Ripe, and 89 Overripe coffee cherries.
2. The proposed mobile app incorrectly classified 6 Unripe, 10 Semi-ripe, 8 Ripe, and 11 Overripe coffee cherries.
3. The most commonly misclassified maturity level was Overripe, with 8 Overripe coffee cherries incorrectly classified as Ripe and 3 as Semi-ripe.
4. The highest True Positive count class of coffee cherry maturity level is
“Unripe” with 94 correct classifications.
These results can be used to evaluate the performance of the Robusta Coffee
Cherry Ripeness Classification mobile application in classifying the four maturity
levels of coffee cherries. The confusion matrix provides a detailed breakdown of the
true and predicted labels, allowing the researcher to identify which classes the proposed
mobile app struggles with the most.
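A confusion matrix like Table 14 is assembled by counting, for each pair of classes, how many samples of the true class received each predicted class; diagonal cells are correct classifications and off-diagonal cells are errors. A minimal sketch (the sample label and prediction lists are invented for illustration):

```python
# Sketch of building a confusion matrix from paired label/prediction lists.
# Cell [i][j] counts samples of true class i predicted as class j.
CLASSES = ["Unripe", "Semi-ripe", "Ripe", "Overripe"]

def confusion_matrix(true_labels, predicted_labels):
    index = {name: i for i, name in enumerate(CLASSES)}
    matrix = [[0] * len(CLASSES) for _ in CLASSES]
    for t, p in zip(true_labels, predicted_labels):
        matrix[index[t]][index[p]] += 1
    return matrix

# Tiny invented example, not the study's validation data:
truth = ["Unripe", "Unripe", "Ripe", "Overripe", "Overripe"]
preds = ["Unripe", "Unripe", "Ripe", "Ripe", "Overripe"]
cm = confusion_matrix(truth, preds)
# cm[3][2] counts Overripe cherries misclassified as Ripe.
```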
Table 15
Table of Predictions During Mobile Application Testing
ANALYSIS
Labels       True Positives   False Positives   False Negatives   True Negatives
Unripe             94                 0                 6               300
Semi-Ripe          90                11                10               289
Ripe               92                13                 8               287
Overripe           89                11                11               289
To determine the four important counts for the analysis of object detection and classification algorithms, it is important to note that:
• True Positives (TP) are test objects of a given class that are correctly classified as that class.
• False Positives (FP) are test objects that are incorrectly classified as the given class.
• False Negatives (FN) are test objects of the given class that are missed or classified as another class.
• True Negatives (TN) are test objects of other classes that are correctly not classified as the given class.
The researcher was able to extrapolate the TP, TN, FP, and FN counts for each
class from Table 14, and these are presented in Table 15. As shown in Table 15, the
Unripe class has 94 TP, 0 FP, 6 FN, and 300 TN. The Semi-ripe class has 90 TP, 11 FP,
10 FN, and 289 TN. For the Ripe class, it has 92 TP, 13 FP, 8 FN, and 287 TN. Lastly,
the Overripe class has 89 TP, 11 FP, 11 FN, and 289 TN. Consequently, the researcher
can calculate the metrics listed in Table 16 using these data.
Table 16
Table of Evaluation Metrics for the Proposed Mobile Application
ANALYSIS
Class       Recall    Precision   AP        mAP@0.5   Specificity   F-1 Score   False Detection   Missed Detection   Accuracy
Unripe      94.00%    100.00%     98.00%    95.70%    100.00%       96.91%       0.00%             6.00%             98.50%
Semi-ripe   90.00%     89.11%     97.10%    95.70%     96.33%       89.55%      10.89%            10.00%             94.75%
Ripe        92.00%     87.62%     96.10%    95.70%     95.67%       89.76%      12.38%             8.00%             94.75%
Overripe    89.00%     89.00%     91.50%    95.70%     96.33%       89.00%      11.00%            11.00%             94.50%
Precision is the ratio of true positive predictions among all positive predictions,
while recall is the ratio of true positives among all actual positives. AP (average
precision) is the area under the precision-recall curve, which combines precision and
recall to measure the overall performance of the mobile application for a particular class.
mAP (mean average precision) is the average of the AP values for all classes.
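As a check, the Ripe-class metrics of Table 16 can be re-derived from its Table 15 counts (TP = 92, FP = 13, FN = 8, TN = 287), and the mAP from the per-class AP values reported from the validation precision-recall curves (0.980, 0.971, 0.961, 0.915). A minimal sketch (the helper name is illustrative):

```python
# Sketch of deriving the Table 16 metrics from TP/FP/FN/TN counts.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "false_detection": 1 - precision,   # wrong detections among predictions
        "missed_detection": 1 - recall,     # missed objects among actual positives
    }

# Ripe-class counts from Table 15:
ripe = metrics(tp=92, fp=13, fn=8, tn=287)
# ripe["precision"] is about 0.8762 and ripe["accuracy"] is 0.9475,
# matching the 87.62% and 94.75% reported for Ripe in Table 16.

# mAP is the mean of the per-class AP values:
mAP = sum([0.980, 0.971, 0.961, 0.915]) / 4   # roughly 0.957
```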
Looking at Table 16, the “Unripe” class has the highest recall, precision, and AP, indicating that the mobile app performs well for this class, while the “Overripe” class has the lowest recall, precision, and AP, indicating that the mobile app has more difficulty identifying this class. The mAP for all classes is 95.70%, which means that the overall performance of the mobile application is relatively high. The specificity for all classes is high, indicating that the mobile app is good at identifying negative instances. The missed detection and false detection rates are also relatively low for all classes, indicating that the mobile app makes relatively few errors in classifying instances. Finally, the overall accuracy across all classes is 95.63%, which means that the classifier is correct in its predictions around 95.63% of the time.
Figure 52
Evaluation Metrics Box Plot for Each Class
A box plot is presented in order to provide the researcher and the readers a visual representation of the distribution of the evaluation metric results shown in Table 16. These metrics include Precision, Recall, AP, Specificity, F-1 Score, and Accuracy for each class. By comparing the box plots side by side, the researcher can quickly see differences in the center, spread, and skewness of the metric values. The box plot shows that the proposed mobile application performs best in classifying unripe coffee cherries compared to the other three classes. The plot also shows that the lowest metric value is the precision of identifying ripe coffee cherries, which indicates that future improvements should focus on enhancing its precision. Moreover, it suggests that there are no outliers in the evaluation results of the four classes, indicating that all evaluation metrics are relatively high for all four classes.
Table 17
Trials 1 to 3 Percentage Score Results
TRIAL 1 - TOTAL TEST OBJECTS (20)
No.   Classification   Total   Identification      %
1     Unripe             5           3           60.00%
2     Semi-ripe          5           5          100.00%
3     Ripe               5           4           80.00%
4     Overripe           5           1           20.00%

TRIAL 2 - TOTAL TEST OBJECTS (50)
No.   Classification   Total   Identification      %
1     Unripe            12           8           66.67%
2     Semi-ripe         13          11           84.62%
3     Ripe              13           9           69.23%
4     Overripe          12           1            8.33%

TRIAL 3 - TOTAL TEST OBJECTS (100)
No.   Classification   Total   Identification      %
1     Unripe            25          15           60.00%
2     Semi-ripe         25          19           76.00%
3     Ripe              25           9           36.00%
4     Overripe          25           0            0.00%
Table 17 shows the performance results of the YOLOv5 model in three different trials. The model's percentage scores for each classification were computed for every trial. The table shows that the model achieved its highest percentage scores in Trial 1, with 60% for Unripe, 100% for Semi-ripe, 80% for Ripe, and 20% for Overripe. However, the model's performance declines as the total number of test objects increases. In Trial 2, the model's percentage score for Unripe coffee cherries is higher than in Trial 1, at 66.67%, but the other maturity levels scored 84.62% for Semi-ripe, 69.23% for Ripe, and 8.33% for Overripe. The scores further declined in Trial 3, at 60% for Unripe, 76% for Semi-ripe, 36% for Ripe, and 0% for Overripe. This indicates that as the user increases the number of test objects in one frame, the performance of the model decreases in terms of correct identification and classification of the maturity levels of Robusta coffee cherries. The data from Table 17 are illustrated and summarized in the graph below:
Figure 53
Line Graph of Trials 1, 2 and 3 Percentage Scores
[Line graph: percentage score (0% to 120% on the y-axis) across Trials 1, 2, and 3 for each maturity level.]
Figure 53 shows the trend of the percentage scores of Unripe, Semi-ripe, Ripe, and Overripe coffee cherries over the three trials. The graph illustrates a clear downward trend in the percentage scores of Semi-ripe, Ripe, and Overripe coffee cherries from Trial 1 to Trial 3. Unripe coffee cherries, however, did not follow this trend, with percentage scores of 60%, 66.67%, and 60% in Trials 1, 2, and 3, respectively. Moreover, the graph shows that Semi-ripe has the highest percentage scores across all three trials compared to the other maturity levels, while Overripe has the lowest percentage scores from Trial 1 to Trial 3, reaching 0% correct identification.
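The percentage scores in Table 17 and Figure 53 follow directly from dividing correct identifications by the number of test objects per class; a minimal sketch using the Trial 1 counts (the function name is illustrative):

```python
# Sketch of the per-class percentage score used in Table 17: the share of
# test objects of a class that the app identified and classified correctly.
def percentage_score(correct, total):
    return 100 * correct / total

# Trial 1 counts from Table 17 (5 test objects per class):
trial1 = {
    "Unripe": percentage_score(3, 5),      # 60.00%
    "Semi-ripe": percentage_score(5, 5),   # 100.00%
    "Ripe": percentage_score(4, 5),        # 80.00%
    "Overripe": percentage_score(1, 5),    # 20.00%
}
```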
CHAPTER 5
CONCLUSION AND RECOMMENDATIONS
This chapter covers four sections. A general synopsis of the study is provided in the first section, followed by a summary of the findings and their conclusions. Subsequent to these are the implications of the study and the recommendations for additional investigation.
Coffee is a staple beverage for Filipinos. Ernie Masceron, senior vice president and head of corporate affairs for Nestlé Philippines, claimed that Filipinos are avid coffee consumers: according to Masceron (2018), around 21 million cups of coffee are consumed daily in the Philippines. Filipino coffee farmers, however, are unable to meet the nation's growing demand for coffee products. Since it requires a large investment in terms of time, labor, and money, Filipino farmers are still hesitant to invest in the coffee industry. Furthermore, optimal harvesting practices require in-depth knowledge and skills to ensure that the farmers select only ripe coffee cherries for picking. With this, the objective of the study was to develop a mobile application capable of detecting and accurately classifying unripe, semi-ripe, ripe, and overripe coffee cherries.
• The algorithm performs relatively well during testing, with true
positive counts for the Unripe, Semi-ripe, Ripe, and Overripe classes of
94, 90, 92, and 89, respectively.
• The Precision-Recall Curve shows that all four classes have both high
precision and recall using the validation dataset obtaining an AP of 0.980,
0.971, 0.961, and 0.915 for Unripe, Semi-ripe, Ripe, and Overripe class,
respectively.
• The algorithm’s train/box_loss value is 0.024727, while train/cls_loss
value is 0.0050826, and the train/obj_loss value is 0.011186. The graph
shows that the losses decrease with each iteration which suggests that
the algorithm is learning well during training.
• The metrics/precision graph and the metrics/recall graph show the
performance of the algorithm during training in terms of precision and
recall. It obtained a precision value of 93.84% and a recall value of
95.315% in the last iteration.
• The model’s val/box_loss value is 0.024664, while the val/cls_loss value is
0.0063864, and the val/obj_loss value is 0.0042132. The graph shows
that the losses decrease with each iteration, which suggests that the model
still performs well on a new, unseen dataset.
• Figure 51 shows that the F1 score peaks at 0.92 at a confidence threshold
of around 0.654, indicating that this confidence threshold provides the best
balance between precision and recall.
• During the first experimental setup, it was observed that the proposed
mobile app correctly classified 94 Unripe, 90 Semi-ripe, 92 Ripe, and
89 Overripe.
• The proposed mobile app incorrectly classified 6 Unripe, 10 Semi-ripe,
8 Ripe, and 11 Overripe.
• The most commonly misclassified maturity level was the Overripe, with
8 Overripe coffee cherries being incorrectly classified as Ripe, and 3 as
Semi-ripe.
• The highest True Positive count class of coffee cherry maturity level is
“Unripe” with 94 correct classifications.
• The “Unripe” class has the highest recall (94%), precision (100%) and
AP (98%), indicating that the mobile app performs well for this class.
• The “Overripe” class has the lowest recall (89%), precision (89%) and
AP (91.50%), indicating that the mobile app has more difficulty
identifying this class.
• The mAP for all classes is 95.70%, which means that the overall
performance of the mobile application is relatively high.
• The specificity for all classes is high, indicating that the mobile app is
good at identifying negative instances.
• The false detection and missed detection rates for all classes are relatively
low, indicating that the mobile app makes relatively few errors in
classifying instances.
• The overall precision (91.43%), recall (91.25%), mAP (95.70%),
Specificity (97.08%), F1-score (91.30%), false detection (8.57%),
missed detection (8.75%) and accuracy (95.63%) of the proposed mobile
application indicates that it is performing well during the first test
experimental setup.
• Figure 53 illustrates a clear downward trend of percentage scores from
Trials 1 to 3 of Semi-ripe, Ripe and Overripe coffee cherries, indicating
that as the number of mixed-class coffee cherries present in one frame
increases, the performance of the model decreases.
5.3. Conclusion
The mobile app has the potential to be used in the coffee industry, especially in
local coffee farms. The findings of this study can benefit several sectors such as the
Department of Agriculture, Filipino coffee farmers, ECE professionals, and future
researchers.
The Robusta Coffee Cherry Ripeness classification mobile application's
successful development opens up new opportunities for improving harvesting
efficiency, optimizing quality control, and facilitating decision-making processes in the
coffee farming industry. However, there are limitations to this study that should be
taken into account. Future work will include expanding the dataset, refining the
classification model, and incorporating additional features such as implementing the
Robusta Coffee Cherry Ripeness Classification model in microcontrollers to include an
automatic sortation system, which will reduce coffee farmers' workload and provide a
comprehensive solution for them and industry stakeholders.
Overall, this mobile app has the potential to revolutionize the Philippine coffee
industry and make the coffee cherry harvesting and sorting process more advanced and
systematized.
5.4. Recommendations
Based on the findings and conclusions of the study, the researcher would like to
recommend that:
1. Future researchers further investigate the causes behind the decline of the
mobile application’s performance as the total number of test objects present in
one frame increases.
2. The model’s ability to identify and classify multiple coffee cherries in a single
frame should be improved by future studies.
3. Future researchers should add classification of other coffee varieties,
particularly the ones cultivated in the country.
4. Future researchers should consider implementing the Robusta Coffee Cherry
Ripeness Classification model in microcontrollers and including other features
such as automated sortation system that will lessen the workload of coffee
farmers and increase the efficiency of the harvesting and sorting process.
REFERENCES
A. (2019, June 6). Types of Coffee that Grow in the Philippines. Types of Coffee That
Grow in the Philippines. Retrieved November 25, 2022, from
https://caffeinebrothers.co/types-of-coffee-that-grow-in-the-philippines/
Araneta, A., Carrasco, B., Rahemtulla, H., Balgos, S., & Sy, S. (2021, February
22). Mapping Digital Poverty in ph. INQUIRER.net.
https://business.inquirer.net/318223/mapping-digital-poverty-in-ph
Bazame, Helizani & Molin, Jose & Althoff, Daniel & Martello, Maurício. (2021).
Detection, classification, and mapping of coffee fruits during harvest with computer
vision. Computers and Electronics in Agriculture. 183. 106066.
10.1016/j.compag.2021.106066.
Beller. (2001, June 18). How Coffee Works. Processing Cherries - How Coffee Works
| HowStuffWorks. Retrieved November 25, 2022, from
https://science.howstuffworks.com/innovation/edible-innovations/coffee.htm
Benmetan, T. (2018, January 27). The Coffee Belt, A World Map of the Major Coffee
Producers | Seasia.co. Good News From Southeast Asia. Retrieved November 25,
2022, from https://seasia.co/2018/01/27/the-coffee-belt-a-world-map-of-the-major-
coffee-producers
Brennan, S. (2021, August 31). Philippine Coffee: History, Flavors & Brewing Tips -
Coffee Affection. Coffee Affection. Retrieved November 25, 2022, from
https://coffeeaffection.com/philippine-coffee-guide/
Caretti. (2016). The process of coffee production: from seed to cup. New Food
Magazine. Retrieved November 25, 2022, from
https://www.newfoodmagazine.com/article/28006/process-coffee-production-seed-
cup/
Dalvi, L. P., Sakiyama, N. M., Andrade, G. S., Cecon, P. R., Pereira de Silva, F. A.,
and Oliveira, L. S. G., “Coffee Production through Wet Process: Ripeness and Quality,”
Academic Journals: African Journal of Agricultural Research, 12(36), 2783-
2787 (2017).
Department of Agriculture Philippines. (2015). PHILIPPINE NATIONAL
STANDARD. Code of Good Agricultural Practices for Coffee.
https://bafs.da.gov.ph/bafs_admin/admin_page/pns_file/2021-02-24-PNSBAFS169-
2015CodeofGoodAgriculturalPracticesGAPforCoffee.pdf
G. Chandan, A. Jain, H. Jain and Mohana, "Real Time Object Detection and Tracking
Using Deep Learning and OpenCV," 2018 International Conference on Inventive
Research in Computing Applications (ICIRCA), Coimbatore, India, 2018, pp. 1305-
1308, doi: 10.1109/ICIRCA.2018.8597266.
G. Yang et al., "Face Mask Recognition System with YOLOV5 Based on Image
Recognition," 2020 IEEE 6th International Conference on Computer and
Communications (ICCC), Chengdu, China, 2020, pp. 1398-1404, doi:
10.1109/ICCC51575.2020.9345042.
Herrera Pérez, Jean Carlos, Medina Ortiz, Silfri Manuel, Martínez Llano, Gabriel
Enrique, Beleño Sáenz, Kelvin de Jesús, & Berrio Pérez, Julie Stephany. (2016).
Clasificación de los frutos de café según su estado de maduración y detección de la
broca mediante técnicas de procesamiento de imágenes. Prospectiva, 14(1), 15-22.
https://doi.org/10.15665/rp.v14i1.640
Horvat, Marko & Jelečević, Ljudevit & Gledec, Gordan. (2022). A comparative study
of YOLOv5 models performance for image localization and classification.
Hui, J. (2020, December 15). SSD object detection: Single Shot MultiBox Detector for
real-time processing. Medium. Retrieved November 25, 2022, from https://jonathan-
hui.medium.com/ssd-object-detection-single-shot-multibox-detector-for-real-time-
processing-9bd8deac0e06
Martinez-Alpiste, Ignacio & Golcarenarenji, Gelayol & Wang, Qi & Alcaraz Calero,
Jose. (2022). Smartphone-based real-time object recognition architecture for portable
and constrained systems. Journal of Real-Time Image Processing. 19. 10.1007/s11554-
021-01164-1.
Mermelstein, N. H. (2012, January). Coffee Quality Testing. IFT.org. Retrieved
November 25, 2022, from https://www.ift.org/news-and-publications/food-
technology-magazine/issues/2012/january/columns/food-safety-and-quality
P. (2019, July 16). Coffee Farming And Cultivation. Pinoy Bisnes Ideas. Retrieved
November 25, 2022, from https://www.pinoybisnes.com/agri-business/coffee-farming-
and-cultivation/
Pacas, J. A. (2016, February 24). Coffee Science: How Can We Identify & Improve
Cherry Ripeness? Coffee Science: How Can We Identify & Improve Cherry Ripeness?
- Perfect Daily Grind. Retrieved November 25, 2022, from
https://perfectdailygrind.com/2016/02/coffee-science-how-can-we-identify-improve-
cherry-ripeness/
Paulo Felipe, C. J. (2022, April 18). Keeping the coffee industry alive in Kalinga.
Special Area for Agricultural Development. Retrieved November 25, 2022, from
https://saad.da.gov.ph/2022/04/keeping-the-coffee-industry-alive-in-kalinga
Perez, V. O., Perez, L. G. M., Alduenda, M. R. F., Barreto, C. I. A., Agudelo, C. P. G.,
and Restrepo, E. C. M., “Chemical Composition and Sensory Quality of Coffee Fruits
at Different Stages of Maturity,” Agronomy, 13(2), 1-15(2023)
Philippine Coffee Board, Inc. (2022, October 2). Philippine Coffee - Philippine Coffee
Board. Philippine Coffee Board. Retrieved November 25, 2022, from
https://philcoffeeboard.com/philippine-coffee/
Plant Village. (n.d.). Coffee | Diseases and Pests, Description, Uses, Propagation.
Coffee | Diseases and Pests, Description, Uses, Propagation. Retrieved November 25,
2022, from https://plantvillage.psu.edu/topics/coffee/infos
Pushpins Consulting Company. (2020, August 31). The Different Types of Philippine
Coffee--and How To Brew That Perfect Cup. Pushpins. Retrieved November 25, 2022,
from https://www.pushpins.com.ph/different-types-of-philippine-coffee/
Sudana, Oka & Jacob, Deden & Putra, Adhitya & Raharja, I Made. (2020). Mobile
Application for Identification of Coffee Fruit Maturity using Digital Image Processing.
International Journal on Advanced Science, Engineering and Information Technology.
10. 980. 10.18517/ijaseit.10.3.11135.
Tan. (2021, October 5). Revised Philippine Coffee Roadmap hopes to revitalize local
coffee industry. Manila Bulletin. Retrieved November 25, 2022, from
https://mb.com.ph/2021/10/05/revised-philippine-coffee-roadmap-hopes-to-revitalize-
local-coffee-industry/
Trevisan. (2018, August 28). What are the factors that affect coffee quality? What Are
the Factors That Affect Coffee Quality? Retrieved November 25, 2022, from
https://blog.eureka.co.it/en/factors-that-affect-coffee-quality
Y. -Q. Ou, C. -H. Lin, T. -C. Huang and M. -F. Tsai, "Machine Learning-based Object
Recognition Technology for Bird Identification System," 2020 IEEE International
Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taoyuan, Taiwan,
2020, pp. 1-2, doi: 10.1109/ICCE-Taiwan49838.2020.9258061.
Yusibani, E., Putra, R. I., Rahwanto, A., Surbakti, M. S., Rajibussalim, and Rahmi,
“Physical Properties of Sidikalang Robusta Coffee Beans Medium Roasted from
Various Colors of Coffee Cherries,” Journal of Physics, 2243(2022).
APPENDIX A
Android Phone Specifications
Huawei Nova 3i
Specifications
NETWORK
  Technology: GSM / HSPA / LTE
  2G Bands: GSM 850 / 900 / 1800 / 1900 - SIM 1 & SIM 2
  3G Bands: HSDPA 850 / 900 / 2100
  4G Bands: 1, 3, 5, 7, 8, 28, 38, 40, 41
  Speed: HSPA 42.2/5.76 Mbps, LTE-A (3CA) Cat12 600/150 Mbps
LAUNCH
  Announced: 2018, July 18
  Status: Available. Released 2018, July 27
BODY
  Dimensions: 157.6 x 75.2 x 7.6 mm (6.20 x 2.96 x 0.30 in)
  Weight: 169 g (5.96 oz)
  Build: Glass front, plastic back, plastic frame
  SIM: Hybrid Dual SIM (Nano-SIM, dual stand-by)
DISPLAY
  Type: IPS LCD
  Size: 6.3 inches, 97.4 cm2 (~82.2% screen-to-body ratio)
  Resolution: 1080 x 2340 pixels, 19.5:9 ratio (~409 ppi density)
PLATFORM
  OS: Android 8.1 (Oreo), upgradable to Android 9.0 (Pie), EMUI 9.0
  Chipset: Kirin 710 (12 nm)
  CPU: Octa-core (4x2.2 GHz Cortex-A73 & 4x1.7 GHz Cortex-A53)
  GPU: Mali-G51 MP4
MEMORY
  Card Slot: microSDXC (uses shared SIM slot)
  Internal: 64GB 4GB RAM, 128GB 4GB RAM, 128GB 6GB RAM
MAIN CAMERA
  Dual: 16 MP, f/2.2, PDAF; 2 MP (depth)
  Features: LED flash, HDR, panorama
  Video: 1080p@30fps
SELFIE CAMERA
  Dual: 24 MP, f/2.0, 26mm (wide), 1/2.8", 0.9µm; 2 MP, depth sensor
  Features: HDR
  Video: 1080p@30fps
SOUND
  Loudspeaker: Yes
  3.5mm Jack: Yes
COMMS
  WLAN: Wi-Fi 802.11 b/g/n, Wi-Fi Direct
  Bluetooth: 4.2, A2DP, LE, EDR, aptX HD
  Positioning: GPS, GLONASS, BDS
  NFC: No
  Radio: FM Radio
  USB: microUSB 2.0, OTG
FEATURES
  Sensors: Fingerprint (rear-mounted), accelerometer, gyro, proximity, compass
BATTERY
  Type: Li-Ion 3340 mAh, non-removable
  Charging: 10W wired
APPENDIX B
Codes and Scripts
B.2. Android Studio

DetectorFactory.java

package org.tensorflow.lite.examples.detection.tflite;

import android.content.res.AssetManager;

import java.io.IOException;

public class DetectorFactory {
    public static YoloV5Classifier getDetector(
            final AssetManager assetManager,
            final String modelFilename)
            throws IOException {
        String labelFilename = null;
        boolean isQuantized = false;
        int inputSize = 0;
        int[] output_width = new int[]{0};
        int[][] masks = new int[][]{{0}};
        int[] anchors = new int[]{0};

        if (modelFilename.equals("yolov5s.tflite")) {
            labelFilename = "file:///android_asset/customclasses.txt";
            isQuantized = false;
            inputSize = 416;
            output_width = new int[]{80, 40, 20};
            masks = new int[][]{{0, 1, 2}, {3, 4, 5}, {6, 7, 8}};
            anchors = new int[]{
                    10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                    59, 119, 116, 90, 156, 198, 373, 326
            };
        } else if (modelFilename.equals("best-fp16.tflite")) {
            labelFilename = "file:///android_asset/customclasses.txt";
            isQuantized = false;
            inputSize = 416;
            output_width = new int[]{40, 20, 10};
            masks = new int[][]{{0, 1, 2}, {3, 4, 5}, {6, 7, 8}};
            anchors = new int[]{
                    10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                    59, 119, 116, 90, 156, 198, 373, 326
            };
        } else if (modelFilename.equals("yolov5s-int8.tflite")) {
            labelFilename = "file:///android_asset/customclasses.txt";
            isQuantized = true;
            inputSize = 416;
            output_width = new int[]{40, 20, 10};
            masks = new int[][]{{0, 1, 2}, {3, 4, 5}, {6, 7, 8}};
            anchors = new int[]{
                    10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                    59, 119, 116, 90, 156, 198, 373, 326
            };
        }

        YoloV5Classifier yoloV5Classifier = YoloV5Classifier.create(assetManager,
                modelFilename, labelFilename, isQuantized, inputSize);
        return yoloV5Classifier;
    }
}

AndroidManifest.xml

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="org.tensorflow.lite.examples.detection">

    <!-- Tell the system this app requires OpenGL ES 3.1. -->
    <uses-feature android:glEsVersion="0x00030001" android:required="true" />

    <uses-sdk />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:allowBackup="false"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/tfe_od_app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme.ObjectDetection"
        android:hardwareAccelerated="true"
        android:debuggable="true"
        android:installLocation="internalOnly">

        <activity
            android:name=".DetectorActivity"
            android:label="@string/tfe_od_app_name"
            android:screenOrientation="portrait">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

    </application>
</manifest>

MainActivity.java (excerpt)

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.RectF;
import android.os.Bundle;
import android.os.Handler;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.Toast;

import org.tensorflow.lite.examples.detection.customview.OverlayView;
import org.tensorflow.lite.examples.detection.env.ImageUtils;
import org.tensorflow.lite.examples.detection.env.Logger;
import org.tensorflow.lite.examples.detection.env.Utils;
import org.tensorflow.lite.examples.detection.tflite.Classifier;
import org.tensorflow.lite.examples.detection.tflite.YoloV5Classifier;
import org.tensorflow.lite.examples.detection.tracking.MultiBoxTracker;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

public class MainActivity extends AppCompatActivity {

    public static final int TF_OD_API_INPUT_SIZE = 640;
    private static final boolean TF_OD_API_IS_QUANTIZED = false;

    // ...

        imageView = findViewById(R.id.imageView);
        cameraButton.setOnClickListener(v ->
                startActivity(new Intent(MainActivity.this, DetectorActivity.class)));

        trackingOverlay = findViewById(R.id.tracking_overlay);
        trackingOverlay.addCallback(
                canvas -> tracker.draw(canvas));
        tracker.setFrameConfiguration(TF_OD_API_INPUT_SIZE, TF_OD_API_INPUT_SIZE,
                sensorOrientation);

        try {
            detector = YoloV5Classifier.create(
                    getAssets(),
                    TF_OD_API_MODEL_FILE,
                    TF_OD_API_LABELS_FILE,
                    TF_OD_API_IS_QUANTIZED,
                    TF_OD_API_INPUT_SIZE);
        } catch (final IOException e) {
            e.printStackTrace();
            LOGGER.e(e, "Exception initializing classifier!");
            Toast toast =
                    Toast.makeText(
                            getApplicationContext(),
                            "Classifier could not be initialized",
                            Toast.LENGTH_SHORT);
            toast.show();
            finish();
        }

    // ...

        final List<Classifier.Recognition> mappedRecognitions =
                new LinkedList<Classifier.Recognition>();
        // ...
        //     cropToFrameTransform.mapRect(location);
        //     result.setLocation(location);
        //     mappedRecognitions.add(result);
        // ...
        tracker.trackResults(mappedRecognitions, new Random().nextInt());
        // trackingOverlay.postInvalidate();
        imageView.setImageBitmap(bitmap);
    }
}
CURRICULUM VITAE
SKILLS
• Time Management and Planning Skills
• Experienced in drafting and designing electronics plans using AutoCAD
• Basic knowledge of Structured Cabling, FDAS Systems, and Surveillance
Security Cameras
• Teamwork, Critical Thinking, Entrepreneurship, Research, Creativity
• Proficient in Microsoft applications (Excel, Word, PowerPoint)
• Strong attention to detail
• Leadership Skills
EDUCATION
• University of Science and Technology of Southern Philippines
Bachelor of Science in Electronics Engineering | 2019-Present
C.M. Recto Avenue Lapasan, Cagayan de Oro City 9000
• University of Science and Technology of Southern Philippines
Senior High School (STEM) | 2017-2019
C.M. Recto Avenue Lapasan, Cagayan de Oro City 9000
• Don Mariano Canoy Colleges
Junior High School | 2013-2017
Eagle St., Kauswagan, Cagayan de Oro City 9000
EXPERIENCES
• Intern | 2022
Civil Aviation Authority of the Philippines
• Auditor | 2020-2021
Junior Institute of Electronics Engineers of the Philippines – USTP
• Treasurer | 2021-2022
Junior Institute of Electronics Engineers of the Philippines – USTP
• Auditor | 2022-Present
Junior Institute of Electronics Engineers of the Philippines – NMS
REFERENCES
Available upon request