
World Academy of Science, Engineering and Technology

International Journal of Computer and Information Engineering


Vol:9, No:4, 2015

Object Motion Tracking Based On Color Detection for Android Devices

Zacharenia I. Garofalaki, John T. Amorginos, John N. Ellinas

Abstract—This paper presents the development of a robot car that can track the motion of an object by detecting its color through an Android device. The employed computer vision algorithm uses the OpenCV library, which is embedded into an Android smartphone application, to manipulate the captured image of the object. The captured image is subjected to color conversion and, after color filtering, is transformed into a binary image for further processing. The desired object is clearly determined after pixel noise is removed by image morphology operations and contour definition. Finally, the area and the center of the object are determined so that its motion can be tracked. The smartphone running the application is mounted on a robot car and transmits motion directives by Bluetooth to an Arduino assembly, so that the car follows objects of a specified color. The experimental evaluation of the proposed algorithm shows reliable color detection and smooth tracking characteristics.

Keywords—Android, Arduino Uno, Image processing, Object motion detection, OpenCV library.

Z. I. Garofalaki and J. T. Amorginos are with the Department of Electronic Computer Systems, Piraeus University of Applied Sciences, 250 P. Ralli & Thivon, 12244 Egaleo, Greece (e-mail: [email protected], [email protected]).
J. N. Ellinas is with the Department of Electronic Computer Systems, Piraeus University of Applied Sciences, 250 P. Ralli & Thivon, 12244 Egaleo, Greece (phone: +302105381208; fax: +302105381436; e-mail: [email protected]).

I. INTRODUCTION

One of the most important and challenging tasks of computer vision is object detection and motion tracking. Numerous online or offline applications have been developed for object surveillance, traffic control, security environments, medical image processing, augmented reality, etc. [1]-[3]. A review of tracking algorithms and their classification into different categories is presented in Alper Yilmaz's paper [4]. Among them, the low computational complexity kernel-based tracking is a localization method based on the maximization of a similarity measure, such as the color of an object [5], [6]. The implemented kernel-based tracker, which is a typical scheme for object detection from an image, is based on color tracking, range thresholding and contour detection techniques. However, the challenge for the various approaches is to discriminate objects from their background, and the success of most of the presented methods depends on the color difference between them.

Mobile devices such as smartphones and tablets with the Android operating system can be integrated into robotic applications and used to detect and track objects. Today's devices have enough processing strength and can sufficiently respond to computer vision algorithms of medium complexity. One of the most widely used libraries for image processing and computer vision is the Open Source Computer Vision library (OpenCV), which has been selected to be part of the presented Android application [7].

The proposed work has been developed on a robot car equipped with hardware for the motion and an Android device that runs an application for the object's color detection and a motion tracking algorithm. The object's captured image is subjected to a number of different image processing steps, implemented with OpenCV functions, that finally result in a blob of the object with an estimate of its area and its center with respect to the mobile screen coordinates.

The tracking algorithm estimates the relative blob position with respect to the screen center, taking into account the orientation of the Android device. The disordered car movements towards left or right, created by small deviations around the center of the screen, may be alleviated by separating the screen into three vertical zones and checking whether the object's blob crosses their borders. The width of the middle zone may be trimmed so that the forward movement is consistent with the motion of the object.

The following sections describe the hardware used and the image processing steps for object detection and motion tracking.

II. PROPOSED WORK

A. Overview

The proposed work concerns the development of a robot car that can track and follow colored moving objects with an Android mobile device, as Fig. 1 illustrates. The chassis consists of two 20x15 cm plastic panels on top and bottom, spaced by separator screws. The four wheels are 66 mm diameter rubber tires, which are turned by four dc motors with a reduction rate of 48:1 and an average speed of 190 rpm. The four-wheel robot car is driven by an Arduino Uno board, a motor shield that can drive four dc motors, and a Bluetooth device, all of which have been placed on top of the chassis. Next to them, an Android device has been plugged in at the front side of the car and communicates over Bluetooth with the rest of the hardware.

B. Arduino Application

The motion of the robot car is controlled by an Arduino Uno board with a motor shield that can drive four independent dc motors. These boards communicate with an Android device through a Bluetooth module at a baud rate of 57,600 bps. The Arduino's software provides motion in four directions after receiving the first character of the words Forward,

International Scholarly and Scientific Research & Innovation 9(4) 2015 999 scholar.waset.org/1307-6892/10001450

Backward, Left, Right, and can stop the motion with the character "S". In the proposed work, optical encoders or Hall effect sensors for wheel synchronization are not used, as this was beyond the purpose of the project. Future work would incorporate encoders and a PID algorithm for precise movement of the robot car.

Fig. 1 Robot car for object and motion tracking

C. Android Application

The Android device runs an application that has been developed in the Android development suite and incorporates the OpenCV 2.4.9 API. This Java API was selected to handle the vision algorithms, instead of developing native code, for simplicity, although it is far more expensive in computational speed. The basic application uses the abstract class SampleCvViewBase.java, and the image processing steps are embedded in a subclass of it. All the following processing manipulations are OpenCV functions, and they have been selected for efficient results with low computational complexity.

Fig. 2 Block diagram for object tracking by color detection

D. Object Detection Procedure

Object detection is a kernel-based method that tracks an object by its color, which has been pre-specified by the user. The Android device captures a scene with its back camera, and the resulting image frame is subjected to various image processing steps in order to isolate the object with the pre-determined color. These steps are illustrated in the block diagram of Fig. 2. The Android device continuously captures an image, as Fig. 3 (a) illustrates, and converts the default RGBA color space into HSV, because color is then described by one variable instead of three, which facilitates image processing. The conversion of the colored image to a binary image is performed by color filtering, using thresholds for the three components of the HSV color space. After filtering, the object that has the pre-defined color appears as a group of white pixels, Fig. 3 (b). However, many white pixels are also dispersed across the frame, and these constitute pixel noise.

The removal of pixel noise is usually performed by erosion and dilation morphological operations, with a structuring element of a size suitable to eliminate the noise but leave the object intact, Fig. 3 (c). The resulting object blob, which is a group of white pixels forming a coherent space, is analyzed to determine the largest contour, which is assumed to be the desired object. The largest contour is shown with a white rectangle that bounds the desired object, whereas other objects of the same color are ignored. This is a simplistic and rough approach, but it has low computational complexity. The alternative and more precise way is to implement a feature extraction algorithm, which is far more complicated than the proposed method.

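The detection steps above (HSV color filtering, morphological noise removal, largest-blob selection) can be sketched in pure Python on small binary grids. This is an illustrative re-implementation, not the authors' Java code: in the actual application these stages correspond to OpenCV's cvtColor, inRange, erode/dilate and findContours/boundingRect, and the simplified noise filter below only clears isolated pixels rather than applying a true structuring element.

```python
def in_range(hsv_pixel, lo, hi):
    """Color filtering: True if every HSV component lies within its thresholds."""
    return all(lo[i] <= hsv_pixel[i] <= hi[i] for i in range(3))

def color_filter(hsv_image, lo, hi):
    """Turn an HSV image (2-D grid of (h, s, v) tuples) into a binary mask."""
    return [[1 if in_range(p, lo, hi) else 0 for p in row] for row in hsv_image]

def erode_isolated(mask):
    """Crude pixel-noise removal: clear white pixels with no white 4-neighbour
    (a stand-in for erosion/dilation with a structuring element)."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not any(
                0 <= j < h and 0 <= i < w and mask[j][i]
                for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            ):
                out[y][x] = 0
    return out

def largest_blob_bbox(mask):
    """Return the bounding rectangle (x, y, w, h) of the largest 4-connected
    white blob, i.e. the 'largest contour' assumed to be the desired object."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = None
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:  # iterative flood fill over the component
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for j, i in ((cy - 1, cx), (cy + 1, cx),
                                 (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= j < h and 0 <= i < w and mask[j][i] and not seen[j][i]:
                            seen[j][i] = True
                            stack.append((j, i))
                if best is None or len(blob) > len(best):
                    best = blob
    if best is None:
        return None
    xs = [p[1] for p in best]
    ys = [p[0] for p in best]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
```

For example, a 4x6 mask containing a 2x3 blob plus one stray white pixel reduces, after erode_isolated, to the single blob whose bounding box is (1, 1, 3, 2).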

Fig. 3 Image processing steps: (a) Frame of the captured image; (b) Binary image after color filtering; (c) Binary image after pixel noise removal; (d) Object tracking after largest contour evaluation

E. Object Motion Tracking Procedure

After the above procedure of largest contour determination, the resulting image shows the largest object bounded by a white rectangle. OpenCV provides functions for rectangle dimensions or area calculation, which facilitates the implementation of the remaining procedure for motion tracking. Fig. 4 illustrates the bounded object with the coordinates of its upper-left corner and the size of the bounding rectangle.

Fig. 4 Object is bounded by a white rectangle for motion tracking

The easiest way to track motion is to determine whether the object's center lies to the left or right of the middle of the screen when the Android device is in landscape mode. However, for small object movements about the middle of the screen, the application keeps sending messages for left or right motion and the robot car moves continuously. This abnormal behavior can be mitigated by dividing the mobile device's screen into three zones, where the middle zone is used as a buffer zone. If the center of the object is within the middle zone, the car moves forward, whereas if it crosses the zone to the left or right, the device sends a message to the robot car to turn left or right respectively. The area and the dimensions of the bounding rectangle may be evaluated by OpenCV's functions, and therefore the relative position of the object with respect to the smartphone's view may be determined. Fig. 5 shows the smartphone's view with the bounding rectangle of the tracked object and the estimated quantities. The width and height of the view are represented by W and H, the width and height of the object are w and h, and the coordinates of the top-left corner of the object are x and y respectively.

The view is divided into three equal zones, as shown in Fig. 5 by the two imaginary lines P1 and P2, and the direction of the robot car's motion is defined by the object's relative position. This is estimated by the following logical statements, where x + w/2 is the middle of the object's boundaries:

    Turn left:      x + w/2 < W/3                  (1)

    Turn right:     x + w/2 > 2·W/3                (2)

    Move forward:   W/3 ≤ x + w/2 ≤ 2·W/3          (3)

The width of the middle zone may be trimmed so that the motion of the robot car is more consistent with the object's movement. It should not be too small, because the car would then turn left or right too easily, and it should not be too large, because the car would mainly move forward and lose the object. The approaching speed of the car may be constant, or it may vary in inverse proportion to the area of the bounding rectangle, and the car may stop when the area exceeds a threshold value.

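The three-zone decision of (1)-(3), together with the single-character commands accepted by the Arduino sketch of Section II-B, condenses to a few comparisons. The following Python sketch is illustrative only: the function name and the stop-area threshold are assumptions for this example, not values given in the paper.

```python
def steer(x, w_obj, W, area, stop_area):
    """Return the single-character motion command ('F', 'L', 'R', 'S')
    for the robot car, given the object's bounding rectangle.

    x, w_obj  -- left edge and width of the bounding rectangle
    W         -- width of the (landscape) screen
    area      -- area of the bounding rectangle
    stop_area -- illustrative threshold: stop once the object looks this big
    """
    if area >= stop_area:      # object close enough: stop the car
        return 'S'
    mid = x + w_obj / 2        # middle of the object's boundaries
    if mid < W / 3:            # left of line P1  -> turn left,  eq. (1)
        return 'L'
    if mid > 2 * W / 3:        # right of line P2 -> turn right, eq. (2)
        return 'R'
    return 'F'                 # inside the buffer zone -> forward, eq. (3)
```

Narrowing or widening the buffer zone amounts to replacing W/3 and 2·W/3 with tighter or looser bounds, which is the zone trimming discussed above; the paper likewise suggests scaling the approach speed inversely with the rectangle's area.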

Fig. 5 Determination of the relative object's position

III. CONCLUSION

This paper describes the development of a four-wheeled robot car, driven by an Arduino Uno microcontroller board with a motor shield and a Bluetooth module and controlled by an Android device that runs an application that can track moving objects of a pre-specified color. As a first step, the Android device captures an image that is subjected to various image processing manipulations, such as color space conversion, color filtering, noise removal and largest contour estimation, resulting in a blob for an object with a predefined color. The object of the largest contour is bounded by a white rectangle, and its center is determined in order to track its motion and inform the robot car about the direction of movement. The motion of the robot car in the correct direction, so as to follow the object's motion, can be made more accurate if the landscape screen of the smartphone is divided into three vertical zones and the decision is taken when the center of the object's bounding rectangle crosses a zone.

ACKNOWLEDGMENT

This paper is part of a Master's degree thesis project for the Department of Electronic Computer Systems Engineering, which is funded by the Research Committee of Piraeus University of Applied Sciences.

REFERENCES

[1] Tang Sze Ling et al., "Colour-based Object Tracking in Surveillance Application", Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. I, Hong Kong, March 2009.
[2] J. F. Engelberger, "Health-care Robotics Goes Commercial: The HelpMate Experience", Robotica, vol. 11, 1993, pp. 517-523.
[3] L. Davis, V. Philomin and R. Duraiswami, "Tracking Humans from a Moving Platform", IEEE Computer Society Proceedings of the International Conference on Pattern Recognition, vol. 4, 2000, pp. 4171.
[4] A. Yilmaz, O. Javed and M. Shah, "Object Tracking: A Survey", ACM Computing Surveys, vol. 38, no. 4, article 13, 2006.
[5] D. Comaniciu and P. Meer, "Mean Shift: A Robust Approach Toward Feature Space Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, 2002, pp. 603-619.
[6] D. Comaniciu, V. Ramesh and P. Meer, "Kernel-based Object Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, 2003, pp. 564-575.
[7] http://opencv.org/platforms/android.html

Zacharenia I. Garofalaki received the B.Sc. degree in Electronic Computer Systems Engineering, School of Engineering, from Piraeus University of Applied Sciences, Greece, in 2000. She is currently pursuing her M.Sc. degree in "Applied Information Systems" at the Piraeus University of Applied Sciences, Department of Electronic Computer Systems Engineering. Her research interests are in object oriented programming, digital image processing and embedded systems.

John T. Amorginos received the B.Sc. degree in Electrical Engineering from Piraeus University of Applied Sciences, Greece, in 2009, where he is currently a research associate. He is currently pursuing his M.Sc. degree in "Applied Information Systems" at the Piraeus University of Applied Sciences, Department of Electronic Computer Systems Engineering. His research interests are in computer architecture and mechatronic systems.

John N. Ellinas graduated in Electrical and Electronic Engineering from the University of Sheffield, Sheffield, England, in 1977 and received the M.Sc. degree in Telecommunications from the University of Sheffield in 1978. He also received the Ph.D. degree in informatics and telecommunications from the University of Athens, Department of Informatics and Telecommunications, in 2005. Since 1983, he has been with the Department of Electronic Computer Systems, Piraeus University of Applied Sciences, Greece, where he is a Professor. His main interests are in the field of embedded computer systems and digital image processing. His research activity is focused on microcontroller systems design, image and video coding, image restoration and watermarking.
