
Object Detection Using YOLO Algorithm

ABSTRACT

The objective is to detect objects using the You Only Look Once (YOLO) approach. This method has several advantages over other object detection algorithms. Algorithms such as the Convolutional Neural Network (CNN) and the Fast Convolutional Neural Network do not look at the image as a whole, whereas YOLO looks at the complete image, predicting the bounding boxes and the class probabilities for those boxes with a single convolutional network, and therefore detects objects faster than other algorithms. This project aims to harness YOLO's efficiency and accuracy to address the need for timely and precise object detection in various scenarios. Keywords: Convolutional Neural Network, Fast Convolutional Neural Network, Bounding Boxes, YOLO.

The result of this concept is a C/C++-based GNU/Linux system that can detect and keep track of objects by reading the pixel values of frames captured by the camera module. The application transmits useful information, such as object locations and sizes, to other computers on the network that send the appropriate query.

INTRODUCTION

The main aim of object detection is to detect all the objects present in a particular image. The key requirement is to make the system learn how to detect objects, which means training it on a dataset and then allowing it to detect objects by itself. Searching and recognition in a real-time scenario is hard, and no fully effective solution has been found so far. Though a lot of work is ongoing, the methods developed so far are not efficient: they require long training times, are not suitable for real-time applications, and do not scale to large numbers of classes. Object detection is relatively simple if the machine is looking for one particular object. However, recognizing all objects inherently requires the ability to differentiate one object from another, even when they are of the same type. Such a problem is very difficult for machines if they do not know about the various possible appearances of objects. In hospitals, traffic-control monitors, or self-driving cars, object detection underpins the best performance. For example, in a hospital, when a doctor examines a scan report, they can tell whether or not there is a virus in the body. So we decided to build a REAL TIME OBJECT DETECTION system. It is sometimes very tough to choose between different object detection techniques.
Object Detection Conception:

Image classification is one of the many exciting applications of convolutional neural networks. Beyond simple image classification, there are many interesting problems in computer vision, of which object detection is perhaps the most fascinating. It is often associated with self-driving vehicles, where systems combine computer vision, LIDAR, and other technologies to produce a multi-faceted representation of the road with all its participants.

Object detection is also widely used in video surveillance, especially in crowd monitoring to prevent terrorist attacks, for demographic analysis, or to analyse customer behaviour in stores.

Object Detection Misconceptions

A learning-based detection approach often outperforms hand-crafted methods if the training and test data come from the same source. However, if this is not the case, the learning-based ensemble can show poor performance, potentially worse than that of a single detector used alone.

OBJECTIVE

The fundamental purpose is to build a C/C++ program, with the help of OpenCV libraries, that can be used in UNIX-based embedded systems, specifically using YOLO to:

• Find coloured blobs in the live camera feed.

• If these blobs are judged genuine (certain conditions are met), keep them in memory to track their coordinates and other properties.

• Ensure correct and continuous tracking of these objects.

• Send useful information about these objects as datagrams (using network sockets with the UDP protocol) to any client application.

METHODOLOGY

This project is carried out to analyse the performance of detection with YOLO and to record its properties under various test conditions. The conditions are varied by changing the following parameters:

• Colour of the image.

• Number of images.

• Various image distances.

• Different image sizes.

The above conditions are tested in this setup to determine the performance of object detection using YOLO. The following properties of YOLO are measured with the practical experiment of the device setup:

• Efficiency

• Security

• Capacity

EFFICIENCY:
YOLO divides each image into an S x S grid, and each grid cell predicts N bounding boxes together with a confidence score. The confidence indicates the accuracy of the bounding box and whether the box actually contains an object (regardless of class). YOLO also predicts, for each box, the classification probability of each class seen in training. The two can be combined to calculate the probability that each class is present in a predicted box.

In total, S x S x N boxes are predicted. However, most of these boxes have low confidence scores; if we apply a threshold of, say, 30% confidence, most of them can be removed.
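To make this concrete, here is a minimal sketch (NumPy only, with hypothetical values) of how a box's confidence combines with the class probabilities into class-specific scores, and how a 30% threshold filters them:

import numpy as np

box_confidence = 0.8                     # confidence that the box contains an object
class_probs = np.array([0.1, 0.7, 0.2])  # P(class | object) for 3 classes

class_scores = box_confidence * class_probs  # class-specific confidence scores
print(class_scores)                          # [0.08 0.56 0.16]

keep = class_scores > 0.3                    # apply the 30% confidence threshold
print(keep)                                  # [False  True False]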

SECURITY:
A great advantage of using YOLO is its excellent speed: it is incredibly fast and can process 45 frames per second. YOLO also understands generalized object representation. It is among the best detection algorithms and has shown performance comparable to the R-CNN family of algorithms.

CAPACITY:
The biggest advantage of using YOLO is, again, its speed of 45 frames per second combined with a generalized object representation, which makes it one of the best algorithms for object detection, with performance nearly comparable to the R-CNN algorithms.

AVAILABILITY:
YOLO frames detection as a regression problem, so images can be processed anywhere across the globe, which makes building a worldwide network of detectors easier. It is not only easy to use but also works with simple, ordinary image sources.

PROCEDURE:

• The image signal is fed into YOLO's main pipeline. The input source may vary in intensity, and the image is adjusted according to the input signal.

• The signal then propagates through the network to reach the output side of the system.

• Take an input image of a particular shape.

• Pass this image to the convolutional neural network (CNN), which returns a multi-dimensional output (see the sketch below).
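As a rough illustration of these steps, the sketch below (assuming OpenCV's dnn module and the model files model/yolov3.cfg and model/yolov3.weights used later in the source code; 'input.jpg' is a placeholder path) resizes an input image to the shape the network expects and runs a forward pass:

import cv2 as cv

net = cv.dnn.readNetFromDarknet('model/yolov3.cfg', 'model/yolov3.weights')

image = cv.imread('input.jpg')  # input image of a particular shape ('input.jpg' is hypothetical)
blob = cv.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)

net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())  # multi-dimensional output
print([o.shape for o in outputs])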

CHAPTER IV

WORKING

YOLO sees the entire image during training and test time, and therefore implicitly encodes contextual information about the classes and their appearance. YOLO learns generalizable representations of objects, so that when trained on natural images and tested on artwork, the algorithm outperforms other advanced detection methods.

Now that we have understood why YOLO is such a useful foundation, let's get into how it actually works. This section describes the steps YOLO follows to detect objects in an image.

 YOLO begins by taking an input image:

Figure 3.1

 The framework then divides the input image into grid cells:

Figure 3.2
• Image classification and localization are applied to each grid cell. YOLO predicts the bounding boxes and their corresponding class probabilities for objects (if any are present).

Let's break down each step so we get a clearer understanding of what we have just read.

We need to pass labelled data to the model to train it. Suppose we split the image into a 3 x 3 grid, and there are a total of 3 classes into which we want the objects to be classified: say Pedestrian, Car, and Motorcycle. Then, for each grid cell, the label y is an eight-dimensional vector:
Table 3.1

where
pc indicates whether or not an object exists in the grid cell,
bx, by, bh, and bw specify the bounding box if an object is present,
and c1, c2, and c3 represent the classes. So if the detected object is a car, c2 will be 1 while c1 and c3 will be 0, and so on.
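As a concrete sketch (with hypothetical numbers), the eight-dimensional label for a cell containing a car could be written in NumPy like this:

import numpy as np

# y = [pc, bx, by, bh, bw, c1, c2, c3]
# pc: object present; bx, by: box midpoint relative to the cell;
# bh, bw: box size relative to the cell; c1..c3: Pedestrian, Car, Motorcycle
y_car = np.array([1, 0.4, 0.3, 0.9, 0.5, 0, 1, 0])  # cell containing a car (values hypothetical)
y_empty = np.array([0, 0, 0, 0, 0, 0, 0, 0])        # empty cell: only pc matters, the rest are "don't care"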
Suppose we select the first grid cell in the example above:

Figure 3.3
Since there is no object in this grid cell, pc will be 0 and the label y for this cell will be:

Table 3.2

Figure 3.4
Before we write the label y for this grid cell, it is important to first understand how YOLO decides whether there really is an object in the cell. In the image above there are two objects (two cars), so YOLO takes the midpoint of each object, and each object is assigned to the grid cell that contains its midpoint. The label for the centre-left grid cell containing the car will be:

Table 3.3
Since there is an object in this cell, pc will be equal to 1. In this way, for each of the 9 grid cells, we will have an eight-dimensional output vector. The overall output will have a 3 x 3 x 8 shape.

So now we have an input image and its corresponding target vector.

We run forward and backward passes to train our model. During the test phase, we pass the image to the model and run it forward until we get the output y. To keep things simple, I have explained this using a 3 x 3 grid here, but in real situations we usually take larger grids. Even when an object spans more than one grid cell, it is assigned only to the cell in which its midpoint lies. We can reduce the chance of multiple objects appearing in the same grid cell by increasing the number of grid cells.
How to Encode Bounding Boxes?

As mentioned before, the box coordinates are computed relative to the grid cell we are dealing with. Let's understand this concept with an example. Consider the centre-right grid cell containing the car:

Figure 3.5
The coordinates will be calculated relative to this cell. Its label is:

Table 3.4
Here pc = 1, since there is an object in the cell and it is a car. Now,

Figure 3.6

Figure 3.7
bh is the ratio of the height of the bounding box (the red box in the example above) to the height of the corresponding grid cell, which for us is around 0.9, so bh = 0.9. bw is the ratio of the width of the bounding box to the width of the grid cell, so bw = 0.5 (approximately). The label for this cell will be:

Table 3.5
Note that bx and by will always lie between 0 and 1, since the midpoint always falls within the cell, while bh and bw can be greater than 1 if the bounding box is larger than the grid cell.
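A minimal sketch of this encoding (hypothetical pixel values, 3 x 3 grid) converts a box given in image pixels into (bx, by, bh, bw) relative to the grid cell that holds its midpoint:

def encode_box(cx, cy, w, h, img_w, img_h, S=3):
    # grid cell containing the box midpoint
    cell_w, cell_h = img_w / S, img_h / S
    col, row = int(cx // cell_w), int(cy // cell_h)
    bx = (cx - col * cell_w) / cell_w   # midpoint relative to the cell, always in [0, 1]
    by = (cy - row * cell_h) / cell_h
    bw = w / cell_w                     # size relative to the cell, can exceed 1
    bh = h / cell_h
    return row, col, bx, by, bh, bw

print(encode_box(cx=300, cy=250, w=150, h=270, img_w=600, img_h=600))
# (1, 1, 0.5, 0.25, 1.35, 0.75) -- note bh > 1 for a box taller than its cell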

In the next section, we will look at some ideas that can make this algorithm work better.

Intersection over Union and Non-Max Suppression

Here is a thought-provoking question: how do we decide whether a predicted box gives us a good (or bad) result? This is where Intersection over Union (IoU) comes in. It measures the overlap between the ground-truth bounding box and the predicted bounding box. Consider the ground-truth and predicted bounding boxes of a car, as shown below:

Figure 3.8

Here, the red box is the ground-truth bounding box and the blue box is the predicted one. How do we decide whether or not this is a good prediction? IoU computes the area of intersection over the area of union of these two boxes. That value will be:

Figure 3.9

IoU = Area of intersection / Area of union, for example

IoU = Yellow box area / Green box area

If the IoU is above 0.5, we can say that the prediction is good enough. 0.5 is an arbitrary threshold we have taken here, and it can be changed depending on your specific problem: the higher the threshold, the more accurate the predictions must be.
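A minimal IoU sketch for axis-aligned boxes given as (x, y, width, height), with (x, y) the top-left corner:

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    # intersection rectangle (zero area if the boxes do not overlap)
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou((50, 50, 100, 100), (75, 75, 100, 100)))  # about 0.39, below the 0.5 cut-off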
There is one more technique that can significantly improve the YOLO output: Non-Max Suppression.

One of the most common problems with object detection algorithms is that, rather than detecting an object just once, they may detect it multiple times. Consider the image below:

Figure 3.10

Here, the cars are detected more than once. Non-Max Suppression cleans this up so that we get a single detection per object. Let's see how it works.

1. It starts by looking at the probabilities associated with each detection and picks the largest one. In the figure above, 0.9 is the highest probability, so the box with probability 0.9 is selected first:

Figure 3.11
2. Now it looks at all the other boxes in the image. Boxes that have a high IoU with the current box are suppressed. So the boxes with probabilities 0.6 and 0.7 are suppressed in our example:

Figure 3.12

3. After those boxes are suppressed, it selects the next box with the highest probability among the remaining ones, which is 0.7 for us:

Figure 3.13
4. It again checks the IoU of this box with the remaining boxes and suppresses the boxes with a high IoU:

5. We repeat these steps until all boxes have either been selected or suppressed, and we get the final bounding boxes:

This is what Non-Max Suppression does: we take the boxes with the highest probability and suppress the boxes that lie very close to them. Let us quickly summarize the points covered in this section regarding the Non-Max Suppression algorithm:

Discard all boxes with probability less than or equal to the previously defined threshold (say, 0.5).

For the remaining boxes:

Select the box with the highest probability and take that as the output prediction.

Discard any other box whose IoU with the output box from the step above is greater than the threshold.

Repeat step 2 until every box is either taken as a prediction or discarded. The full procedure is sketched in code below.
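The summary above maps directly onto a short sketch (pure Python, reusing the iou helper from the previous snippet):

def non_max_suppression(boxes, scores, score_thr=0.5, iou_thr=0.5):
    # step 1: discard boxes at or below the probability threshold
    candidates = [i for i, s in enumerate(scores) if s > score_thr]
    candidates.sort(key=lambda i: scores[i], reverse=True)  # highest probability first
    kept = []
    while candidates:
        best = candidates.pop(0)   # step 2: highest-probability box becomes a prediction
        kept.append(best)
        # step 3: discard remaining boxes that overlap it too much
        candidates = [i for i in candidates if iou(boxes[i], boxes[best]) <= iou_thr]
    return kept

boxes = [(50, 50, 100, 100), (60, 55, 100, 100), (300, 300, 80, 80)]
scores = [0.9, 0.7, 0.8]
print(non_max_suppression(boxes, scores))  # [0, 2]: the overlapping 0.7 box is suppressed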

There is another way we can improve the performance of the YOLO algorithm; let's take a look!

Anchor Boxes
We have seen that each grid cell can detect only a single object. But what if there are multiple objects in a single cell? That can happen quite often, and it leads us to the concept of anchor boxes. Consider the following image, divided into a 3 x 3 grid:

Remember how we assign an object to a grid cell? We take its midpoint and assign the object to the corresponding cell. In the example
above, the midpoints of the two objects lie in the same grid cell. This is what the actual bounding boxes will look like:

We will get only one of the two boxes, either for the car or for the person. But if we use anchor boxes, we can detect both! How do we do this? First, we define in advance two different shapes, called anchor boxes or anchor box shapes. Now, for each grid cell, instead of one output, we will have two. We can always increase the number of anchor boxes; I have taken two here to make the idea easier to follow:

This is what the YOLO label looks like without anchor boxes:
What do you think the label y will be if we have two anchor boxes? Take a moment to think about this before reading further. Got it? The label will be:

The first 8 rows belong to anchor box 1 and the remaining 8 to anchor box 2. Objects are assigned to anchor boxes based on the similarity between their bounding boxes and the anchor box shapes. Since the shape of anchor box 1 is similar to the person's bounding box, the person will be assigned to anchor box 1, and the car will be assigned to anchor box 2. The output in this case, instead of 3 x 3 x 8 (using a 3 x 3 grid and 3 classes), will be 3 x 3 x 16 (since we use 2 anchors).

So, for each grid cell, we can detect two or more objects depending on the number of anchors. Let's bring together all the ideas we have covered so far and link them to the YOLO framework.
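The effect of anchors on the output shape can be checked with a one-line calculation (3 x 3 grid, 3 classes, and the label layout [pc, bx, by, bh, bw, c1, c2, c3] per anchor):

S, n_classes, n_anchors = 3, 3, 2
per_anchor = 5 + n_classes               # pc + 4 box coordinates + class scores
print((S, S, n_anchors * per_anchor))    # (3, 3, 16); with one anchor it would be (3, 3, 8)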

ADVANTAGES

 Less data retention is required, and complexity decreases, since the feature vectors are generated by the network itself.

 This algorithm gives better results when used for finding and tracking targets, and it is ready for real-time tracking.

 It is an iterative algorithm that estimates the parameters of a mixture while simultaneously selecting the number of objects per pixel.

 The algorithm works on different image sequences and gives better performance.

 By combining pixel-level and region-level information, a moving object can be detected in online video. The accuracy and efficiency of the proposed algorithm are high.

 Object detection is inseparably connected to other similar computer vision techniques, such as image recognition and image classification, as it helps us understand and analyse scenes in images or video.

SOURCE CODE
import os
import numpy as np
import cv2 as cv
from threading import Thread
from gtts import gTTS
from playsound import playsound

from ObjectDetection import detectObject, displayImage

playcount = 0  # counter used to give each generated speech file a unique name

def deleteDirectory():
    # remove speech files generated by a previous run (assumes a 'play' folder exists)
    filelist = [f for f in os.listdir('play') if f.endswith(".mp3")]
    for f in filelist:
        os.remove(os.path.join('play', f))

def speak(data, playcount):
    # speak the detected object names on a background thread so detection is not blocked
    class PlayThread(Thread):
        def __init__(self, data, playcount):
            Thread.__init__(self)
            self.data = data
            self.playcount = playcount

        def run(self):
            t1 = gTTS(text=self.data, lang='en', slow=False)
            t1.save("play/" + str(self.playcount) + ".mp3")
            playsound("play/" + str(self.playcount) + ".mp3")

    newthread = PlayThread(data, playcount)
    newthread.start()

def loadLibraries():  # load the YOLOv3 model weights and class labels
    global class_labels
    global cnn_model
    global cnn_layer_names
    class_labels = open('model/yolov3-labels').read().strip().split('\n')  # labels, one per line
    cnn_model = cv.dnn.readNetFromDarknet('model/yolov3.cfg', 'model/yolov3.weights')  # reading model
    cnn_layer_names = cnn_model.getLayerNames()  # all layers of the CNN
    # keep only the unconnected output layers (older OpenCV versions return nested indices)
    cnn_layer_names = [cnn_layer_names[i[0] - 1] for i in cnn_model.getUnconnectedOutLayers()]

def detectFromImage(imagename):  # detect objects in a single image
    # random colors to give each label a unique color
    label_colors = np.random.randint(0, 255, size=(len(class_labels), 3), dtype='uint8')
    image = cv.imread(imagename)  # returns None for an invalid path
    if image is None:
        raise ValueError('Invalid image path')
    image_height, image_width = image.shape[:2]
    image, _, _, _, _, _ = detectObject(cnn_model, cnn_layer_names, image_height,
                                        image_width, image, label_colors, class_labels)
    displayImage(image)  # display the image with detected object labels

def detectFromVideo():  # detect objects in the live camera feed
    global playcount
    label_colors = np.random.randint(0, 255, size=(len(class_labels), 3), dtype='uint8')
    video = cv.VideoCapture(0)
    if not video.isOpened():
        raise ValueError('Unable to load video')
    frame_height, frame_width = None, None
    while True:
        frame_grabbed, frames = video.read()  # take each frame from the video
        if not frame_grabbed:  # stop when no frame could be read
            break
        if frame_width is None or frame_height is None:
            frame_height, frame_width = frames.shape[:2]
        # detect objects in the current frame
        frames, cls, _, _, _, _ = detectObject(cnn_model, cnn_layer_names, frame_height,
                                               frame_width, frames, label_colors, class_labels)
        data = ""
        for label in cls:
            data += label + ","
        cv.imshow("Detected Objects", frames)
        if len(cls) > 0:
            speak("Detected Objects = " + data, playcount)
            playcount = playcount + 1
        if cv.waitKey(5) & 0xFF == ord('q'):
            break

if __name__ == '__main__':
    loadLibraries()
    deleteDirectory()
    detectFromVideo()
Object detection code (the ObjectDetection module imported above):

import numpy as np
import cv2 as cv

def detectObject(CNNnet, total_layer_names, image_height, image_width, image,
                 name_colors, class_labels, Boundingboxes=None,
                 confidence_value=None, class_ids=None, ids=None, detect=True):
    if detect:
        # scale pixels to [0, 1] and resize to the 416 x 416 network input
        blob_object = cv.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        CNNnet.setInput(blob_object)
        cnn_outs_layer = CNNnet.forward(total_layer_names)
        Boundingboxes, confidence_value, class_ids = listBoundingBoxes(
            cnn_outs_layer, image_height, image_width, 0.5)
        # non-max suppression removes overlapping duplicate boxes
        ids = cv.dnn.NMSBoxes(Boundingboxes, confidence_value, 0.5, 0.3)
    if Boundingboxes is None or confidence_value is None or ids is None or class_ids is None:
        raise ValueError('[ERROR] unable to draw boxes.')
    image, cls = labelsBoundingBoxes(image, Boundingboxes, confidence_value,
                                     class_ids, ids, name_colors, class_labels)
    return image, cls, Boundingboxes, confidence_value, class_ids, ids

def labelsBoundingBoxes(image, Boundingbox, conf_thr, classID, ids, color_names, predicted_labels):
    cls = []
    if len(ids) > 0:
        for i in ids.flatten():
            # draw the box and its label on the image
            xx, yy = Boundingbox[i][0], Boundingbox[i][1]
            width, height = Boundingbox[i][2], Boundingbox[i][3]
            class_color = [int(color) for color in color_names[classID[i]]]
            cv.rectangle(image, (xx, yy), (xx + width, yy + height), class_color, 2)
            text_label = "{}: {:.4f}".format(predicted_labels[classID[i]], conf_thr[i])
            cv.putText(image, text_label, (xx, yy - 5), cv.FONT_HERSHEY_SIMPLEX,
                       0.5, class_color, 2)
            cls.append(predicted_labels[classID[i]])
    return image, cls

def listBoundingBoxes(image, image_height, image_width, threshold_conf):
    box_array = []
    confidence_array = []
    class_ids_array = []
    for img in image:  # each output layer
        for obj_detection in img:  # each row: [cx, cy, w, h, objectness, class scores...]
            detection_scores = obj_detection[5:]
            class_id = np.argmax(detection_scores)
            confidence_value = detection_scores[class_id]
            if confidence_value > threshold_conf:
                # coordinates are normalized; scale them back to image size
                Boundbox = obj_detection[0:4] * np.array(
                    [image_width, image_height, image_width, image_height])
                center_X, center_Y, box_width, box_height = Boundbox.astype('int')
                # convert the centre point to the top-left corner
                xx = int(center_X - (box_width / 2))
                yy = int(center_Y - (box_height / 2))
                box_array.append([xx, yy, int(box_width), int(box_height)])
                confidence_array.append(float(confidence_value))
                class_ids_array.append(class_id)
    return box_array, confidence_array, class_ids_array

def displayImage(image):
    cv.imshow("Final Image", image)
    cv.waitKey(0)
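Back in the main script, detectFromImage can be called instead of the webcam loop. A minimal usage sketch ('sample.jpg' is a placeholder path; the model files must exist under model/ as in loadLibraries):

if __name__ == '__main__':
    loadLibraries()
    detectFromImage('sample.jpg')  # placeholder image path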
APPLICATIONS

 Tracking objects
Detection frameworks are increasingly used for tracking objects, for example tracking a ball during a football World Cup match, tracking the movement of a cricket bat, or tracking a person in a video. Object tracking has a variety of uses, including surveillance and security, traffic analysis, video communication, and robotic vision.

 Counting people
Object detection can also be used for counting people, for example to analyse store performance or crowd statistics during festivals. This is often difficult, because people move out of the frame quickly (partly because people vary so much in appearance).

 Automatic CCTV monitoring

Monitoring is a necessary safety and observation feature. The ongoing development of computer-based monitoring encourages the development of various automated surveillance systems. However, their performance is influenced by many factors, and they are not completely reliable. This work explored the ability of an automated monitoring system to reduce the workload of a CCTV operator in detection and tracking tests.
CCTV usually records continuously, so a large amount of memory is needed to store the recorded video. By using the detection framework, we can operate CCTV electronically so that recording starts only when objects are detected. This reduces repeated recording of the same image frame, which improves memory efficiency and reduces the memory requirement, as sketched below.
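A rough sketch of that idea, assuming the detectObject helper and the cnn_model, cnn_layer_names, label_colors, and class_labels variables from the source code above: frames are written to disk only while detections are present.

import cv2 as cv

video = cv.VideoCapture(0)
fourcc = cv.VideoWriter_fourcc(*'XVID')
writer = None

while True:
    grabbed, frame = video.read()
    if not grabbed:
        break
    h, w = frame.shape[:2]
    frame, cls, _, _, _, _ = detectObject(cnn_model, cnn_layer_names, h, w,
                                          frame, label_colors, class_labels)
    if len(cls) > 0:                # objects present: make sure we are recording
        if writer is None:
            writer = cv.VideoWriter('cctv_clip.avi', fourcc, 20.0, (w, h))
        writer.write(frame)
    elif writer is not None:        # nothing detected: stop and save the clip
        writer.release()
        writer = None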

 Car Detection
Car detection is one of the most important parts of our daily lives. As the world moves faster and the number of cars continues to increase day by day, detecting vehicles becomes very important. Using a vehicle detection process, we can identify the number of a speeding vehicle or a vehicle involved in an accident. This also supports public safety and helps reduce the number of crimes committed with motor vehicles. Using vehicle detection technology, Pixel Solutionz successfully detected vehicle speed and vehicle numbers using Optical Character Recognition (OCR). With number-plate detection, Pixel Solutionz was able to measure the speed of the car, and the alert system was successfully upgraded with collision warnings.
 Person Detection
Person detection is a critical task in any video surveillance framework, as it provides important data for a higher-level understanding of video recordings. It plays an obvious role in vehicle applications due to its ability to improve safety systems. Person detection is the computer vision task of locating and tracking people: finding all instances of people present in an image, typically by examining all locations in the image at all possible scales and comparing each small region with known human structures or patterns. Person detection is often regarded as the first step in a video surveillance pipeline, feeding higher-level reasoning modules such as action recognition and dynamic scene analysis.

CONCLUSION

In this paper, we presented the YOLO algorithm for detecting objects using a single neural network. The algorithm generalizes well: it outperforms other strategies when generalizing from natural images to other domains. It is simple to build and can be trained directly on complete images. Region proposal strategies limit the classifier to a particular region, whereas YOLO accesses the entire image when predicting boundaries, and it also predicts fewer false positives in background areas. Compared to other classifier-based algorithms, this algorithm is much more efficient and among the fastest to use in real time.

YOLO is already known for its speed, but there is always room for improving accuracy. Future work could focus on refining the algorithm to better detect objects in various conditions, such as low light, occlusion, or unusual viewpoints. While YOLO is primarily designed for object detection, integrating it with object tracking algorithms could enable more sophisticated applications such as video surveillance, traffic monitoring, and crowd analysis. Optimizing energy efficiency could also extend its applicability to battery-powered devices.

CHAPTER VI

RESULT

Image classification involves predicting the class of an object in an image. Object localization refers to identifying the location of one or more objects in an image and drawing a box around their extent. Object detection combines these two tasks: it localizes and classifies one or more objects in an image.

Figure 4.1
This is the first result shown after typing the command to execute the Python file inside the project directory.

Figure 4.2

In this image, the system recognizes multiple objects such as a chair, a laptop, and a monitor with good accuracy. This shows that the number of objects does not matter: the main goal is to find and detect each object individually with high accuracy.

Figure 4.3

Figure 4.4

Even though the image was taken from the back side, the recognizer, i.e. the YOLO algorithm, handles every viewpoint through its grid calculation, which is why it is known as one of the best algorithms for object detection with good accuracy.

FUTURE SCOPE

In the future, this project can be implemented in self-driving cars with a high-performance detector capable of collecting data at a higher rate, providing the user with a wider range of possibilities and allowing work to be transferred at a relatively higher speed.

This setup can also be extended with higher-quality image sources and different types of activity and face images to enable communication between devices in a connected environment, which will help enable machine learning with lower resource usage in self-driving applications.

Figure 4.5

