VRL 1920 Project 2
BACHELOR OF ENGINEERING
IN
ELECTRONICS AND COMMUNICATION ENGINEERING
Submitted by
D.Samyuktha (316126512132) G.Santosh (316126512136)
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY AND SCIENCES
(Permanently Affiliated to AU, approved by AICTE and Accredited by NBA & NAAC with 'A' Grade)
Sangivalasa, Bheemili Mandal, Visakhapatnam Dist. (A.P)
CERTIFICATE
ACKNOWLEDGEMENT
We would like to express our deep gratitude to our project guide Dr. V. Rajyalakshmi,
Head of the Department, Department of Electronics and Communication Engineering,
ANITS, for her guidance with unsurpassed knowledge and for providing us with the
required facilities for the completion of the project work.
We are very much thankful to the Principal and Management, ANITS,
Sangivalasa, for their encouragement and cooperation in carrying out this work.
We express our thanks to all teaching faculty of the Department of ECE, whose
suggestions during reviews helped us in the accomplishment of our project. We would
like to thank all non-teaching staff of the Department of ECE, ANITS, for providing
great assistance in the accomplishment of our project.
We would like to thank our parents, friends and classmates for their encouragement
throughout our project period. Last but not least, we thank everyone who
supported us directly or indirectly in completing this project successfully.
PROJECT STUDENTS
D.SAMYUKTHA (316126512132)
G.SANTOSH (316126512136)
P.SAI KAUSTHUBH (316126512108)
P.LIKITHA (316126512089)
ABSTRACT
Paralysis is one of the biggest curses to mankind. In the worst case of paralysis, the person can
move only his eyes. Head-movement or voice-based wheelchairs do not hold good in
that situation, so an eyeball-movement-based wheelchair would help such people best. It is
also more accurate when compared to other automated wheelchairs. A
method for eyeball localization is proposed for controlling the wheelchair. An algorithm is
furnished with various processing steps and develops an efficient system that reduces both the
cost and the computational complexity. The primary goal was to detect the eyes in real time and also
to keep track of them. The idea is to create an eye-monitored system which allows movement of
the patient's wheelchair depending on the eye movements. The patient looks directly at the
camera mounted on a head gear and is able to move in a direction just by looking in that
direction.
CONTENTS

Sl. No.  Name of Content                                    Page No.
1        Certificate                                        i
2        Certificate of Authentication                      ii
3        Acknowledgement                                    iii
4        Abstract                                           iv
5        Chapter 1: Introduction                            1
         1.1 Aim
         1.2 Objectives
         1.3 Methodology
         1.4 Organization of the Project
6        Chapter 2: Literature Survey                       8
         2.1 Embedded Systems and Image Processing
         2.2 Raspberry Pi vs Orange Pi
             Raspberry Pi vs Banana Pi
             Raspberry Pi
             GPIO Connector
             Programming Languages Ported to Raspberry Pi
             Advantages & Disadvantages of Python
             Operating Systems on Raspberry Pi
             Haar Cascade Classifiers
             Haar Cascade Detection in OpenCV
             OpenCV
             Image Processing with OpenCV
CHAPTER 1
INTRODUCTION

AIM
There has been a rapid increase in persons prone to quadriplegia with the increasing population.
Several wheelchair systems have been built for disabled persons; some of the wheelchair
systems available so far are discussed below. The hand-gesture-based wheelchair system uses
a MEMS accelerometer sensor attached to the hand, and the wheelchair is then controlled by
hand gestures. The voice-operated wheelchair system uses the voice of the
user to operate the wheelchair. The head- and finger-based automated wheelchair system uses
an accelerometer and a flex sensor to operate the wheelchair. However, all of the above systems require
considerable human effort, and none of them helps people suffering from quadriplegia.
In quadriplegia, paralysis is of the extreme level, in which a person can only move his eyes.
In order to help such disabled persons, the eye-movement-based electronic wheelchair system
using MATLAB came into existence.
OBJECTIVES
1. Learn how to use OpenCV with Python.
2. Understand the techniques of image processing.
3. Gain a clear idea of the architecture and working of the Raspberry Pi.
4. Learn the use of the IR camera and the Pi camera.
5. Understand the advantages of the eyeball-movement technique over other methods.
METHODOLOGY
A head-mounted camera detects the eye movement and the wheelchair is moved accordingly. In the
earlier approach, the head-mounted camera is connected to a laptop where a continuously running MATLAB
script processes the image and gives commands to the microcontroller to control the wheels of
the wheelchair. This system came as a boon for such people, but the constraint was that the laptop
had to be carried every time along with the wheelchair system, which made it bulky and
costly. To remove the bulkiness and cost of the MATLAB-based eye-movement electronic
wheelchair system, people came up with the idea of using a Raspberry Pi
to control the whole wheelchair system. Since the Raspberry Pi has its own OS and is easily
portable, people switched to Raspberry Pi-based wheelchair systems. However, in the
existing Raspberry Pi-based wheelchair system, latency (delay in response) is the biggest
issue. Hence we have come up with a system that uses efficient image-processing algorithms
in OpenCV and reduces the latency as much as possible. OpenCV processes
the eye image and, by applying the two algorithms (centroid and threshold), movement of the
wheelchair is initiated. Python is used for programming the Raspberry Pi. A shell script is used
to run the same procedure continuously whenever power is supplied to the Raspberry Pi through
power backups, i.e. power banks.
IRIS DETECTION
For simplicity, we have attached an IR web camera to the handle part of the wheelchair,
which is used to detect the eye motion. We have then designed an algorithm to track the iris
part of the eye using the centroid-calculation method and implemented it in OpenCV.
Once the iris is tracked, the threshold is set.
THRESHOLD
A very basic principle is used for movement detection. The feature point of both eyes
is taken as the reference. The difference between the pixel values of the eye positions is
calculated by comparing the current snapshot with the previous one. The minimum movement of
the eye for a valid attempt is taken as the threshold. If the difference is above the threshold in
either direction, left or right, the corresponding flag is set. If
the difference is less than the threshold value, then no movement is needed. Sometimes
detection fails due to non-linearity; at such instances a bias can be given to the
eye that was detected in the previous snapshot.
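A minimal sketch of this snapshot-difference check follows. It assumes that prev_x and curr_x hold the horizontal eye position (in pixels) extracted from the previous and current snapshots, and that the threshold value of 15 pixels is only an illustrative choice, not the value used in the project.

def movement_flag(prev_x, curr_x, threshold=15):
    """Return 'left', 'right' or None from the eye-position difference."""
    diff = curr_x - prev_x
    if abs(diff) < threshold:
        return None                     # movement too small: no valid attempt
    return 'right' if diff > 0 else 'left'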
The method uses a Raspberry Pi on which OpenCV has been installed. An IR camera is
interfaced with the Raspberry Pi. The IR camera captures real-time images of the
person's eye and sends them to the continuously running OpenCV Python script. The script
processes each image using the iris-detection algorithm (centroid algorithm) and the
threshold algorithm. The Raspberry Pi then gives the command to the motor-driver
circuit regarding the position and direction in which the wheelchair has to move. The flow
chart of the methodology is given below.
Fig 1.3. Flowchart for Eyeball Movement Based Wheelchair
The same has been replicated by the diagram given below.
THRESHOLD ALGORITHM
First, find the length of the captured image. Along this length, make two divisions using a
mathematical approach as shown in the figure below. The right division is the right
threshold and the left division is the left threshold. If the centroid position lies between these
two divisions, the movement should be in the straight direction. If the centroid position is
greater than the right threshold, initiate right movement. If the centroid position is less
than the left threshold, initiate left movement.
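A rough illustration of this decision is sketched below. It assumes that centroid_x is the x-coordinate of the iris centroid obtained from the centroid algorithm, that width is the length of the captured frame, and that the two divisions fall at one third and two thirds of the frame length (the exact division points used in the project may differ).

def decide_direction(centroid_x, width):
    # The two dividing lines act as the left and right thresholds.
    left_threshold = width / 3
    right_threshold = 2 * width / 3
    if centroid_x > right_threshold:
        return 'right'      # centroid beyond the right division: initiate right movement
    if centroid_x < left_threshold:
        return 'left'       # centroid before the left division: initiate left movement
    return 'straight'       # centroid between the divisions: keep moving straight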
Wheelchair Movement Control
CHAPTER 2
LITERATURE SURVEY
EMBEDDED SYSTEMS AND IMAGE PROCESSING
The first part of this three-part series gives a brief overview of embedded vision and the
various components required to make it work. It also covers the installation procedure for the
OpenCV library. Modern life is incomplete without gadgets, smartphones, automated
appliances and the like. These electronic devices aid us in our daily grind, making our otherwise
mundane or hectic life a bit easier. So what controls these devices? In layman's language, it is a
small circuit with pre-programmed human logic, called an embedded system. Listed below are
some useful definitions.
Embedded system (ES): An embedded system is some combination of computer hardware
and software, either fixed in capability or programmable, that is specifically designed for a
particular function. Industrial machines, automobiles, medical equipment, cameras,
household appliances, airplanes, vending machines and toys (as well as the more obvious
cellular phone and PDA) are among the myriad possible hosts of an embedded system.
Image processing: In electrical engineering and computer science, image processing is any
form of signal processing for which the input is an image, such as a photograph or video
frame. The output of image processing may be either an image, or a set of characteristics or
parameters related to the image. Most image-processing techniques involve treating the
image as a two-dimensional signal, and applying standard signal-processing techniques to it.
In other words, it is basically the transformation of data from a still or video camera into
either a decision or a new representation. All such transformations are done to achieve some
particular goal. The input data may be a live video feed, the decision may be that a face has
been detected, and a new representation may be conversion of a colour image into a greyscale
image.
Banana Pi is an open-source hardware and software platform, documented at banana-pi.org.
DIFFERENCE BETWEEN RASPBERRY PI AND ORANGE PI
Gone are the days when computer accessibility was a big deal even for professionals
and scientists. The first computer, ENIAC (1946), was as big as a room, with less memory
than today's microwave has. Things have changed now; necessity is the mother of every
invention.
There is always a need which drives the change and later comes up with a solution. This
solution again depends upon the users (irrespective of age) and how they use it. These
single-board computers (discussed below in detail), both Raspberry Pi and Orange Pi, have some
disadvantages (misuse) in terms of usage. Originally they were created to promote the basics
of programming and an understanding of software development among kids. These devices
have since been picked up by the development and hacking industry and have become central
devices in many projects since their release in 2013 (Raspberry) and 2016 (Orange).
Both the Raspberry Pi and the Orange Pi are open-source single-board computers. A single-board
computer means that every computer component, be it the microprocessor, memory, I/O or
several other functionalities, is built on a single chip or circuit board. These are low-cost
devices (in comparison to PCs and laptops) and, unlike PCs, they do not rely on expansion
slots for extra functionality.
They are based on the ARM architecture (Advanced RISC Machine). This design is
one of the most important computer architectures, as it offers low energy consumption, high
performance for multiple tasks, low cost and a relatively small size. Mobile phones, tablets,
microprocessors and embedded systems mostly rely on this technology.
RASPBERRY PI Vs BANANA PI
Raspberry Pi is a series of small, low-cost single-board computers that can plug into a
computer monitor or TV and use a standard keyboard and mouse. It is developed in the
United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer
science in schools. It does not include peripherals such as a keyboard, but some accessories
have been included in many official and unofficial bundles.
Banana Pi was influenced by Raspberry Pi; thus, Banana Pi is compatible with Raspberry Pi boards. It can
also run various operating systems such as Raspbian, NetBSD, Android and Debian. It uses an
Allwinner SoC (system on a chip) and is covered by the Linux-sunxi port.
Raspberry Pi is a capable little device that enables people from all walks of life to
explore computer science and to learn how to program in languages like Python. It can
do much of what you would expect a desktop computer to do, from
browsing the internet and playing high-definition video to making spreadsheets, word processing,
and playing games.
Because of its small size and accessible price, the Raspberry Pi was quickly adopted by computer
enthusiasts for projects which require more than a basic microcontroller. Although the
Raspberry Pi is slower than a modern laptop or desktop, it is still a complete Linux computer
that provides all the expected functionality at a low power-consumption level. The
foundation behind the Raspberry Pi is a registered educational charity based in the UK whose
aim is to advance the education of children in the field of computer science and related
subjects. The Raspberry Pi was designed for the Linux operating system, and many Linux
distributions have a version optimized for it. Two of the most popular are
Raspbian, based on the Debian operating system, and Pidora, based on the Fedora operating
system. A good practice when selecting an operating system for the Raspberry Pi is to pick the one
that most closely resembles an operating system the user is already familiar with, in either a desktop or
a server environment.
Banana Pi provides an open-source hardware platform which was produced to run the Elastos.org
open-source operating system. It is a dual-core, Android 4.2 product which is claimed to be better than the
Raspberry Pi. It works well with several Linux distributions in the market, like Debian,
Ubuntu and OpenSuse, and with images that run on the Raspberry Pi and Cubieboard. It has a
Gigabit Ethernet port and a SATA socket. The Banana Pi M1 is about the same size as
a credit card. It has the potential to run games smoothly as it supports 1080p high-definition
video output. The GPIO is compatible with the Raspberry Pi, and it can run Raspberry Pi images
directly. Several versions of the Banana Pi are available in the market, like the Banana Pi M1, Banana
Pi M+, Banana Pi Pro and Banana Pi G1. The versions differ in capabilities such as
operating-system support, available RAM and GPIO capabilities.
RASPBERRY PI
The organisation behind the Raspberry Pi consists of two arms. The first two models were
developed by the Raspberry Pi Foundation. After the Pi Model B was released, the
Foundation set up Raspberry Pi Trading, with Eben Upton as CEO, to develop the third
model, the B+. Raspberry Pi Trading is responsible for developing the technology, while the
Foundation is an educational charity that promotes the teaching of basic computer science in
schools and in developing countries.
According to the Raspberry Pi Foundation, more than 5 million Raspberry Pis had been sold by
February 2015, making it the best-selling British computer. By November 2016 they had sold
11 million units, and 12.5 million by March 2017, making it the third best-selling "general purpose
computer". In July 2017, sales reached nearly 15 million, and in March 2018 sales reached 19
million.
HARDWARE
The Raspberry Pi hardware has evolved through several versions that feature variations
in memory capacity and peripheral-device support.
PROCESSOR
This block diagram describes Model B and B+; Model A, A+, and the Pi Zero are similar, but
lack the Ethernet and USB hub components. The Ethernet adapter is internally connected to
an additional USB port. In Model A, A+, and the Pi Zero, the USB port is connected directly
to the system on a chip (SoC). On the Pi 1 Model B+ and later models the USB/Ethernet chip
contains a five-port USB hub, of which four ports are available, while the Pi 1 Model B only
provides two. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses
a micro USB (OTG) port.
The Broadcom BCM2835 SoC used in the first-generation Raspberry Pi includes a
700 MHz ARM1176JZF-S processor, a VideoCore IV graphics processing unit (GPU), and
RAM. It has a level 1 (L1) cache of 16 KB and a level 2 (L2) cache of 128 KB. The level 2
cache is used primarily by the GPU. The SoC is stacked underneath the RAM chip, so only its
edge is visible. The 1176JZ(F)-S is the same CPU used in the original iPhone, although at a
higher clock rate, and mated with a much faster GPU.
The earlier V1.1 model of the Raspberry Pi 2 used a Broadcom BCM2836 SoC with a
900 MHz 32-bit quad-core ARM Cortex-A7 processor and 256 KB of shared L2 cache. The
Raspberry Pi 2 V1.2 was upgraded to a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit
quad-core ARM Cortex-A53 processor, the same SoC used on the Raspberry Pi 3, but
underclocked (by default) to the same 900 MHz CPU clock speed as the V1.1. The
BCM2836 SoC is no longer in production as of late 2016.
The Raspberry Pi 3+ uses a Broadcom BCM2837B0 SoC with a 1.4 GHz 64-bit quad-core
ARM Cortex-A53 processor and 512 KB of shared L2 cache.
The Raspberry Pi Zero and Zero W use the same Broadcom BCM2835 SoC as the first-generation
Raspberry Pi, although now running at a 1 GHz CPU clock speed.
RAM
On the older beta Model B boards, 128 MB was allocated by default to the GPU, leaving
128 MB for the CPU. On the first 256 MB release of Model B (and Model A), three different
splits were possible. The default split was 192 MB (RAM for the CPU), which should be
sufficient for standalone 1080p video decoding, or for simple 3D, but probably not for both
together. 224 MB was for Linux only, with only a 1080p framebuffer, and was likely to fail
for any video or 3D. 128 MB was for heavy 3D, possibly also with video decoding (e.g.
Kodi). Comparatively, the Nokia 701 uses 128 MB for the Broadcom VideoCore IV.
For the later Model B with 512 MB RAM, new standard memory-split files (arm256_start.elf,
arm384_start.elf, arm496_start.elf) were initially released for 256 MB, 384 MB and 496 MB of
CPU RAM (and 256 MB, 128 MB and 16 MB of video RAM) respectively. But a week or so
later the RPF released a new version of start.elf that could read a new entry in config.txt
(gpu_mem=xx) and could dynamically assign an amount of RAM (from 16 to 256 MB in
8 MB steps) to the GPU, so the older method of memory splits became obsolete and a single
start.elf worked the same for 256 MB and 512 MB Raspberry Pis.
The Raspberry Pi 2 and the Raspberry Pi 3 have 1 GB of RAM. The Raspberry Pi Zero and
Zero W have 512 MB of RAM.
NETWORKING
The Model A, A+ and Pi Zero have no Ethernet circuitry and are commonly connected to a
network using an external user-supplied USB Ethernet or Wi-Fi adapter. On the Model B and
B+ the Ethernet port is provided by a built-in USB Ethernet adapter using the SMSC
LAN9514 chip. The Raspberry Pi 3 and Pi Zero W (wireless) are equipped with 2.4 GHz
WiFi 802.11n (150 Mbit/s) and Bluetooth 4.1(24 Mbit/s) based on the Broadcom
BCM43438 FullMAC chip with no official support for monitor mode but implemented
through unofficial firmware patching and the Pi 3 also has a 10/100 Mbit/s Ethernet port. The
Raspberry Pi 3B+ features dual-band IEEE 802.11b/g/n/ac WiFi, Bluetooth 4.2, and Gigabit
Ethernet (limited to approximately 300 Mbit/s by the USB 2.0 bus between it and the SoC).
SPECIAL-PURPOSE FEATURES
The Pi Zero can be used as a USB device or "USB gadget", plugged into another computer
via a USB port on another machine. It can be configured in multiple ways, for example to
show up as a serial device or an Ethernet device. Although this originally required software
patches, it was added to the mainline Raspbian distribution in May 2016.
The Pi 3 can boot from USB, such as from a flash drive. Because of firmware limitations in
other models, the Pi 3A, 3B and 3B+ are the only boards that can do this.
GENERAL PURPOSE INPUT-OUTPUT (GPIO) CONNECTOR
On the Raspberry Pi 1 Models A+ and B+, Pi 2 Model B, Pi 3 Models A+, B and B+, and Pi Zero
and Zero W, the GPIO header J8 has a 40-pin pinout. Raspberry Pi 1 Models A and B have only the
first 26 pins.
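As a small illustration of driving this header from Python, the sketch below toggles one pin using the same RPi.GPIO calls that appear later in the project code. The physical pin number 16 and the one-second delay are only example values, not project settings.

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)      # address pins by their physical position on the header
GPIO.setup(16, GPIO.OUT)      # configure physical pin 16 as an output

GPIO.output(16, GPIO.HIGH)    # drive the pin high
sleep(1)
GPIO.output(16, GPIO.LOW)     # and low again
GPIO.cleanup()                # release the pins when done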
TOP 10 PROGRAMMING LANGUAGES PORTED TO THE RASPBERRY PI
The Raspberry Pi was designed to encourage young people to learn how to code; the "Pi" in
Raspberry Pi comes from the Python programming language, so the very idea of
programming is written into the name of the computer itself.
In the short time that the Raspberry Pi has been around, a considerable number of
programming languages have been adapted for the Raspberry Pi, either by the creator of the
language, who wanted to support the Pi by porting their creation, or by enthusiastic users who
wanted to see their language of choice available on their platform of choice.
Either way, this plethora of languages speaks volumes for the vibrant ecosystem that is
building up around the Pi, and suggests that with such great support, it will be around for a
long time to come.
Here‘s a quick rundown of some of the languages now available for you to program on the Pi.
Keep in mind that this list is not exhaustive. Remember: If a language can be compiled for
the ARMv6 chip, it can run on the Raspberry Pi.
SCRATCH
Scratch is an entry-level programming language that comes as standard with the Raspberry Pi
distribution, Raspbian. Scratch was originally created by the Lifelong Kindergarten Group at
the MIT Media Lab in Boston, U.S., with an aim to help young people learn mathematical
and computational concepts while having fun making things.
PYTHON
Python is one of the primary programming languages hosted on the Raspberry Pi. Did you
know that Python is named after Monty Python‘s Flying Circus, the comedy team who
brought us Life of Brian? (Which means Raspberry Pi is indirectly named after Monty
Python, too.)Guido Van Rossum, the Dutch programmer who created Python, was a big
Monty Python fan. Python‘s supporters have given Guido the title of Benevolent Dictator for
Life.
HTML5
HTML is the mark-up language that makes the World Wide Web tick. It was devised by Tim
Berners-Lee while he was working at CERN in Geneva as a means to allow scientists in the
organization to share their documents with each other. Before long, it went global. HTML is
the primary building block of the Internet: it tells your browser how to lay out each web
page, and lets one website link to another. The latest version is HTML5. Through its radical
redesign, it has made embedding video or audio into web pages, and writing apps that will run
on any smartphone or tablet, easy.
JAVASCRIPT
2.6.3 JQUERY
jQuery is the most popular JavaScript library. It runs in any browser, and it makes scripting
HTML considerably simpler. With jQuery, you can create rich web interfaces
and interactive components with just a small amount of JavaScript knowledge.
JAVA:-
When Java arrived on the scene, it was greeted with open arms by developers as the first
programming language with which you could write a program that would run on any
operating system, Windows machines and Unix boxes alike, without having to re-write the
code.
This was a great leap forward. No longer did developers have to write in different languages
for each operating system, or compile different iterations for every computer they wanted
their code to run on. They could simply compile the code once and it would run anywhere.
It was originally designed for interactive TV by its creators, James Gosling, Mike Sheridan
and Patrick Naughton, and is named after the Java coffee that the creators consumed in
quantity.
C PROGRAMMING LANGUAGE
The C programming language was written by Dennis Ritchie, using Ken Thompson's B
language as its model. C is one of the most widely used languages in the world, utilized in
everything from complete operating systems to simple programming languages. Linux, the
operating system that runs the Raspberry Pi, is largely written in C, and C is built into all Linux
and Unix systems.
The design of C influenced a great many other programming languages, including Python,
Java, JavaScript, and a programming language called D. It was also extended as Objective-C,
which is the language used to write apps for iPhones and iPads.
2.6.5 C++:-
C++ was developed by the Danish developer Bjarne Stroustrup as a way to enhance C. C++
is used in a million different circumstances, including hardware design, embedded software
(in mobile phones, for example), graphical applications, and programming video games. C++
adds object-oriented features to C. Other object-oriented languages are Java, Smalltalk, Ruby,
and .Net.
PERL:-
Perl has been called the "duct tape that holds the Internet together" and the "Swiss Army
chainsaw of scripting languages". It was given these names because of its flexibility and its
adaptability. Before Perl came along, the Internet was but a collection of static pages. Perl
added a dynamic element, which meant that for the first time websites could be put together
on the fly. Among other things, it enabled ecommerce and sites such as Amazon and eBay to
come into being.
ERLANG:-
Erlang is a programming language used when there is no room for failure. You might use
Erlang if you were running a nuclear power plant or designing a new air-traffic-control
system: mission-critical situations where the computer breaking down would spell disaster.
In this project we used the programming language Python. The following steps are
taken before using the Python language.
IPYTHON:
IPython can then be run from the command line. It works like the standard python3 interpreter, but has
more features. Try typing len? and hitting Enter; you are shown information including the
docstring for the len function:
Type: builtin_function_or_method
Docstring:
len(object) -> integer
apt
Some Python packages can be found in the Raspbian archives and can be installed using apt,
for example with a command of the form sudo apt install python3-numpy.
This is the preferable method of installing, as it means that the modules you install can be kept
up to date easily with the usual sudo apt update and sudo apt upgrade commands.
pip
Not all Python packages are available in the Raspbian archives, and those that are can
sometimes be out of date. If you can't find a suitable version in the Raspbian archives, you
can install packages from the Python Package Index (known as PyPI).
Then install Python packages (e.g. simplejson) with pip3, for example: sudo pip3 install simplejson
A. EXTENSIVE LIBRARIES
As mentioned in our article on Python features, Python ships with an extensive library. It
contains code for various purposes like regular expressions, documentation generation,
unit testing, web browsers, threading, databases, CGI, email, image manipulation and more,
so we don't have to write the complete code for those manually.
B. EXTENSIBLE
As we have seen earlier, Python can be extended to other languages. You can write some of
your code in languages like C++ or C. This comes in handy, especially in projects.
C. EMBEDDABLE
Complementary to extensibility, Python is embeddable as well. You can put your Python code
in the source code of a different language, like C++. This lets us add scripting capabilities to
our code in the other language.
D. IMPROVED PRODUCTIVITY
The language's simplicity and extensive libraries render programmers more productive than
languages like Java and C++ do. Also, the fact that you need to write less lets you get more done.
E. IOT OPPORTUNITIES
Since Python forms the basis of new platforms like the Raspberry Pi, its future looks bright for the
Internet of Things. It is a way to connect the language with the real world.
F. SIMPLE AND EASY
When working with Java, you may have to create a class to print 'Hello World'. But in
Python, just a print statement will do. It is also quite easy to learn, understand and code. This
is why, when people pick up Python, they have a hard time adjusting to other, more verbose
languages like Java.
G. READABLE
Because it is not such a verbose language, reading Python is much like reading English. This
is also why it is so easy to learn, understand, and code. It also does not need curly braces to
define blocks, and indentation is mandatory. This further aids the readability of the code.
H. OBJECT-ORIENTED
This language supports both the procedural and object-oriented programming paradigms.
While functions help us with code reusability, classes and objects let us model the real world.
A class allows the encapsulation of data and functions into one.
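A minimal illustration of this encapsulation is sketched below; the class name, pin numbers and behaviour are made up for the example and are not taken from the project code.

class Wheel:
    """Bundles the data (pin numbers) and behaviour (drive) of one wheel."""
    def __init__(self, forward_pin, backward_pin):
        self.forward_pin = forward_pin
        self.backward_pin = backward_pin

    def drive(self, forward=True):
        active = self.forward_pin if forward else self.backward_pin
        print("driving pin", active)   # placeholder for a real GPIO call

left_wheel = Wheel(16, 18)   # data and functions live together in one object
left_wheel.drive()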
I. FREE AND OPEN-SOURCE
As we said earlier, Python is freely available. Not only can you download Python for
free, but you can also download its source code, make changes to it, and even distribute it. It
comes with an extensive collection of libraries to help you with your tasks.
J. PORTABLE
When you code a project in a language like C++, you may need to make some changes to
it if you want to run it on another platform. But it isn't the same with Python: you need
to code only once, and you can run it anywhere. This is called Write Once Run Anywhere
(WORA). However, you need to be careful not to include any system-dependent features.
K. INTERPRETED
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
So far, we've seen why Python is a great choice for our project. But if you choose it,
you should be aware of its consequences as well. Let's now see the downsides of choosing
Python over another language.
A. SPEED LIMITATIONS
We have seen that Python code is executed line by line. Since Python is interpreted, this
often results in slow execution. This, however, isn't a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to distract us from its speed limitations.
B. WEAK IN MOBILE COMPUTING AND BROWSERS
While it serves as an excellent server-side language, Python is much more rarely seen on the
client side. Besides that, it is rarely ever used to implement smartphone-based applications. One
such application is called Carbonnelle.
C. DESIGN RESTRICTIONS
As you know, Python is dynamically typed. This means that you don't need to declare the
type of a variable while writing the code. It uses duck-typing. But wait, what's that? Well, it
just means that if it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
D. UNDERDEVELOPED DATABASE ACCESS LAYERS
Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and
ODBC (Open DataBase Connectivity), Python‘s database access layers are a bit
underdeveloped. Consequently, it is less often applied in huge enterprises.
E. SIMPLE
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example: I
don't do Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary. This was all about the advantages and disadvantages of the
Python programming language.
The Raspberry Pi supports several OSes and as such usually comes without one. Most of the
time, however, it ships with an SD card that includes NOOBS (New Out Of the Box
Software), an installer that offers a variety of operating systems from which you can choose
which to run on your Raspberry Pi setup. This list includes the operating systems typically in
NOOBS and more.
RASPBIAN OS
Raspbian is a Debian-based OS engineered especially for the Raspberry Pi, and it is the perfect
general-purpose OS for Raspberry Pi users. It employs the Openbox stacking window manager
and the Pi Improved Xwindows Environment Lightweight, coupled with a number of pre-installed
programs which include Minecraft Pi, Java, Mathematica and Chromium.
OSMC
OSMC (Open Source Media Center) is a free, simple, open-source and easy-to-use
standalone Kodi OS capable of playing virtually any media format. It features a modern,
beautiful, minimalist user interface and is completely customizable thanks to the several built-in
images that it comes with. Choose OSMC if you run the Raspberry Pi for managing media
content.
2.8.3. OPENELEC
OpenELEC (Open Embedded Linux Entertainment Center) is a small Linux-based JeOS (Just
enough Operating System) developed from scratch to turn PCs into a Kodi media center.
RISC OS
RISC OS is a unique open-source OS designed specifically for ARM processors by the
creators of the original ARM. It is related neither to Linux nor to Windows and is
maintained by a dedicated community of volunteers. If you choose RISC OS, you
should know that it is very different from any Linux distro or Windows OS you have used, so
it will take some getting used to.
LAKKA
Lakka is a free, lightweight, and open-source distro with which you can turn even the
smallest PC into a full-blown game console without the need for a keyboard or mouse. It
features a beautiful User Interface and so many customization options you might get
overwhelmed. Its PS4-like UX brings style to the Raspberry Pi so pick it if you‘re a gamer.
2.8.7. RASPBSD
RaspBSD is a free and open-source image of FreeBSD 11 that has been preconfigured in two
images for Raspberry Pi computers. If you didn't know, FreeBSD isn't Linux, but it works in
pretty much the same way, as it is a descendant of the research by the Berkeley Software
Distribution. It is among the world's most broadly used operating systems today, with its
code existing in game consoles (e.g. the PlayStation 4), macOS, and more.
RETROPIE
RetroPie is an open-source Debian-based software library with which you can emulate retro
games on your Raspberry Pi, PC, or ODroid C1/C2, and it currently stands as the most
popular option for that task. RetroPie uses the EmulationStation frontend to offer
users a pleasant retro gaming experience, so you can't go wrong with it.
UBUNTU CORE
Ubuntu Core is the version of Ubuntu designed for Internet of Things applications. Ubuntu is
the most popular Linux-based operating system in the world, with more than 20 derivatives, and
given that it has an active and welcoming forum, it will be easy to get up and running
with Ubuntu Snappy Core on your Raspberry Pi.
LINUTOP
Linutop OS is a secure Raspbian-based web kiosk and digital-signage player. It is dedicated
to professionals who need to deploy public Internet kiosks and digital-signage solutions
using Raspberry Pis. This OS is perfect if you run hotels, restaurants, shops, city halls, offices,
museums, etc., and it is compatible with the Raspberry Pi B, B+ and 2.
HAAR CASCADE CLASSIFIERS
A Haar cascade is basically a classifier which is used to detect, from the source, the object for
which it has been trained. The Haar cascade is trained by superimposing the positive
image over a set of negative images. The training is generally done on a server and in various
stages. Object detection using Haar feature-based cascade classifiers is an effective object-detection
method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object
Detection using a Boosted Cascade of Simple Features". It is a machine-learning-based
approach where a cascade function is trained from a lot of positive and negative
images; it is then used to detect objects in other images. Here we will work with face
detection. Initially, the algorithm needs a lot of positive images (images of faces) and
negative images (images without faces) to train the classifier. Then we need to extract
features from them. For this, the Haar features shown in the image below are used. They are just like
our convolutional kernel. Each feature is a single value obtained by subtracting the sum of pixels
under the white rectangle from the sum of pixels under the black rectangle.
Fig 2.3. An example of Haar cascade classifier output
Now, all possible sizes and locations of each kernel are used to calculate lots of features. (Just
imagine how much computation it needs: even a 24x24 window results in over 160,000
features.) For each feature calculation, we need to find the sum of the pixels under the white and
black rectangles. To solve this, they introduced the integral image. However large your
image, it reduces the calculation of a rectangle sum to an operation involving just four pixels.
Nice, isn't it? It makes things super-fast. But among all these features we calculated, most of
them are irrelevant. For example, consider the image below. The top row shows two good
features. The first feature selected seems to focus on the property that the region of the eyes is
often darker than the region of the nose and cheeks. The second feature selected relies on the
property that the eyes are darker than the bridge of the nose. But the same windows applied to
the cheeks or any other place are irrelevant. So how do we select the best features out of the
160,000+? That is achieved by Adaboost.
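A small sketch of the integral-image idea follows. It uses a random 24x24 window as a stand-in for real training data and OpenCV's cv2.integral function; the split of the window into two vertical halves is only an illustrative two-rectangle feature, not one of the features actually selected by the paper.

import cv2
import numpy as np

window = np.random.randint(0, 256, (24, 24), dtype=np.uint8)  # a toy 24x24 window
ii = cv2.integral(window)                                     # integral image, shape (25, 25)

def rect_sum(integral, x, y, w, h):
    # Sum of all pixels inside the rectangle using only four array look-ups.
    return int(integral[y + h, x + w] - integral[y, x + w]
               - integral[y + h, x] + integral[y, x])

# A two-rectangle Haar-like feature: difference between the sums of the two halves.
feature = rect_sum(ii, 0, 0, 12, 24) - rect_sum(ii, 12, 0, 12, 24)
print(feature)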
For this, we apply each and every feature to all the training images. For each feature, it finds
the best threshold which will classify the faces as positive or negative. Obviously, there will
be errors or misclassifications. We select the features with the minimum error rate, which means
they are the features that most accurately classify the face and non-face images. (The process
is not as simple as this. Each image is given an equal weight in the beginning. After each
classification, the weights of misclassified images are increased. Then the same process is repeated:
new error rates are calculated, along with new weights. The process is continued until the required
accuracy or error rate is achieved, or the required number of features is found.) The final
classifier is a weighted sum of these weak classifiers. They are called weak because each alone can't
classify the image, but together they form a strong classifier. The paper says even 200
features provide detection with 95% accuracy. Their final setup had around 6,000 features.
(Imagine a reduction from 160,000+ features to 6,000 features. That is a big gain.)
So now you take an image. Take each 24x24 window. Apply 6,000 features to it. Check whether it is a
face or not. Isn't that a little inefficient and time consuming? Yes, it is, and the authors have a good
solution for it. In an image, most of the area is a non-face region. So it is a better idea to
have a simple method to check whether a window is not a face region. If it is not, discard it in a
single shot and don't process it again; instead, focus on regions where there can be a face.
This way, we spend more time checking possible face regions. For this they introduced the
concept of a cascade of classifiers. Instead of applying all 6,000 features to a window, the
features are grouped into different stages of classifiers and applied one by one. (Normally the
first few stages contain very few features.) If a window fails the first stage,
discard it; we don't consider the remaining features on it. If it passes, apply the second stage
of features and continue the process. A window which passes all stages is a face region.
The authors' detector had 6,000+ features in 38 stages, with 1, 10, 25, 25
and 50 features in the first five stages. (The two features in the image above are actually
obtained as the best two features from Adaboost.) According to the authors, on average 10
features out of the 6,000+ are evaluated per sub-window.
So this is a simple intuitive explanation of how Viola-Jones face detection works. Read the
paper for more details or check out the references in the Additional Resources section.
HAAR-CASCADE DETECTION IN OPENCV
OpenCV comes with a trainer as well as a detector. If you want to train your own classifier for
any object, like cars or planes, you can use OpenCV to create one; the full details are given
under Cascade Classifier Training. Here we will deal with detection. OpenCV already contains
many pre-trained classifiers for faces, eyes, smiles, etc. Those XML files are stored in the
opencv/data/haarcascades/ folder. Let's create a face and eye detector with OpenCV. First we
need to load the required XML classifiers, and then load our input image (or video) in grayscale
mode.
import numpy as np
import cv2 as cv
face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')
img = cv.imread('sachin.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
Now we find the faces in the image. If faces are found, the detector returns the positions of the
detected faces as Rect(x,y,w,h). Once we get these locations, we can create an ROI for the face and
apply eye detection on this ROI (since eyes are always on the face!).
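Continuing the snippet above (it reuses the img, gray, face_cascade and eye_cascade variables already defined there), the detection step described in this paragraph can be sketched as follows; the scale factor 1.3 and minimum-neighbour count 5 are typical tutorial values, not project-specific settings.

faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)   # mark the detected face
    roi_gray = gray[y:y + h, x:x + w]        # face region in the grayscale image
    roi_color = img[y:y + h, x:x + w]        # same region in the colour image
    eyes = eye_cascade.detectMultiScale(roi_gray)   # search for eyes only inside the face
    for (ex, ey, ew, eh) in eyes:
        cv.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
cv.imshow('img', img)
cv.waitKey(0)
cv.destroyAllWindows()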
OPENCV
OpenCV was started at Intel in 1999 by Gary Bradsky, and the first release came out in 2000.
Vadim Pisarevsky joined Gary Bradsky to manage Intel's Russian software OpenCV team. In
2005, OpenCV was used on Stanley, the vehicle that won the 2005 DARPA Grand Challenge.
Later its active development continued under the support of Willow Garage, with Gary
Bradsky and Vadim Pisarevsky leading the project. Right now, OpenCV supports a lot of
algorithms related to computer vision and machine learning, and it is expanding day by day.
Currently OpenCV supports a wide variety of programming languages like C++, Python and
Java, and is available on different platforms including Windows, Linux, OS X, Android and
iOS. Interfaces based on CUDA and OpenCL are also under active development for
high-speed GPU operations. OpenCV-Python is the Python API of OpenCV. It combines the
best qualities of the OpenCV C++ API and the Python language.
OPENCV-PYTHON
Python is a general-purpose programming language started by Guido van Rossum which
became very popular in a short time, mainly because of its simplicity and code readability. It
enables the programmer to express ideas in fewer lines of code without reducing
readability. Compared to languages like C/C++, Python is slower. But an
important feature of Python is that it can be easily extended with C/C++. This feature helps
us to write computationally intensive code in C/C++ and create a Python wrapper for it, so
that we can use these wrappers as Python modules. This gives us two advantages: first, our
code is as fast as the original C/C++ code (since it is the actual C++ code working in the
background) and second, it is very easy to code in Python. This is how OpenCV-Python
works: it is a Python wrapper around the original C++ implementation. The support of
Numpy makes the task even easier. Numpy is a highly optimized library for numerical
operations with a MATLAB-style syntax. All the OpenCV array structures are converted to
and from Numpy arrays. So whatever operations you can do in Numpy, you can combine
with OpenCV, which increases the number of weapons in your arsenal. Besides that, several other
libraries which support Numpy, like SciPy and Matplotlib, can be used with it. OpenCV-Python
is therefore an appropriate tool for fast prototyping of computer vision problems.
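As a tiny example of this interoperability (the file name image.jpg is an assumption), an image loaded through OpenCV is an ordinary Numpy array, so plain Numpy indexing and slicing work on it directly:

import cv2
import numpy as np

img = cv2.imread('image.jpg')          # returns a numpy.ndarray (rows x cols x 3, BGR order)
print(type(img), img.shape, img.dtype)

patch = img[100:200, 150:250]          # plain Numpy slicing extracts a region of interest
img[0:100, 0:100] = 255                # and assignment paints a white square onto the image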
1. The following Python packages are to be downloaded: Python 2.7.x, Numpy, and Matplotlib
(Matplotlib is optional, but recommended since we use it a lot in our tutorials).
2. Install all packages into their default locations. Python will be installed to C:/Python27/.
3. After installation, open Python IDLE. Enter import numpy and make sure Numpy is
working fine.
4. Download the latest OpenCV release from the SourceForge site and double-click to extract it.
7. Go to the opencv/build/python/2.7 folder.
8. Copy cv2.pyd to C:/Python27/lib/site-packages.
9. Open Python IDLE and type the following code in the Python terminal.
>>> import cv2
>>> print cv2.__version__
If the result is printed out without any errors, congratulations! You have installed OpenCV-Python
successfully.
1) numpy.ndarray is a composite datatype. For example, one pixel may hold
blue = 255
green = 254
red = 255 (just an example)
2) uint8 means unsigned 8-bit integer:
00000000 = 0x00 = 0
11111111 = 0xFF = 255
Hence all the values lie within 0-255.
3) An image is a set of such numbers. Each set of 3 numbers represents one pixel, and an image is
pixels arranged in vertical and horizontal directions.
4) From point 1 we have 3 numbers in a set, hence 256^3 = 256*256*256 = 16,777,216 possible
values. This is a very big number, hence it can store a lot of data.
5) Useful array attributes:
img.size
img.shape
img.dtype
img.ndim
[[[255 254 255]
[255 254 255]
[255 254 255]
...
[255 254 255]
[255 254 255]
[255 254 255]]
[[255 254 254]
[255 254 254]
[255 254 254]
...
[211 206 237]
[215 210 234]
[217 211 230]]
[[215 217 235]
2. All the different numbers are just shades of two colours:
0 - black
255 - white
3. Hence the dimensions are two.
3. In OpenCV an image is read as BGR, not as RGB. When we use both read and show in OpenCV
there isn't any problem; the problem arises when we use Matplotlib.
4. In Matplotlib, i.e. when we use plt, the image is taken or shown in RGB form only, whereas
input through OpenCV is in BGR form.
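A small sketch of this issue follows (the file name image.jpg is an assumption): the frame read by OpenCV is in BGR order, so it must be converted before Matplotlib displays it with correct colours.

import cv2
from matplotlib import pyplot as plt

img_bgr = cv2.imread('image.jpg')                     # OpenCV reads in BGR order
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)    # convert for Matplotlib (RGB)

plt.subplot(1, 2, 1)
plt.imshow(img_bgr)                                   # shown as if RGB: colours look wrong
plt.title('raw BGR')
plt.subplot(1, 2, 2)
plt.imshow(img_rgb)                                   # correct colours after conversion
plt.title('after BGR2RGB')
plt.show()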
7. The main conversion flags are COLOR_BGR2RGB, COLOR_BGR2HSV and COLOR_BGR2GRAY.
Using a webcam:
1. In OpenCV, the delay between displayed frames is specified in milliseconds.
2. If we give waitKey(1), the video plays very fast. For a 30 fps capture rate, the time per frame is
1000 / (video capture rate) = 1000 / 30 = 33 ms (approximately).
3. 33 * 30 = 990, which is approximately 1000 ms, so a 33 ms delay corresponds to real-time
playback, whereas waitKey(1) waits only 1 ms per frame.
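A hedged sketch of this timing is shown below: for a camera that delivers roughly 30 frames per second, waiting about 33 ms per displayed frame approximates real-time playback, whereas waitKey(1) shows frames as fast as they arrive. The camera index 0 and the Esc key are only example choices.

import cv2

cap = cv2.VideoCapture(0)            # first attached webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Live video', frame)
    # ~33 ms per frame approximates a 30 fps playback rate; press Esc to quit.
    if cv2.waitKey(33) == 27:
        break
cap.release()
cv2.destroyAllWindows()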
For grayscale images the pixel value is a single number that represents the brightness of that
pixel. The most common pixel format is the byte image, which is stored as an 8-bit integer
giving a range of possible values from 0 to 255. By convention, 0 is taken to be black and 255
is taken to be white; the values in between make up the different shades of gray.
To represent color images, separate red, green and blue components must be specified for each
pixel (assuming an RGB color model), and so the pixel 'value' becomes a vector of three
numbers. Often the three different components are stored as three separate 'grayscale' images
known as color planes (one for each of red, green and blue), which have to be recombined
when displaying or processing.
Fig 2.5. Representation of an image as a matrix
Now allow me to introduce color models formally. A color model is an abstract
mathematical model describing the way colors can be represented as tuples of numbers,
typically as three or four values or color components. When this model is associated with a
precise description of how the components are to be interpreted (viewing conditions, etc.), the
resulting set of colors is called a color space.
Once known how the images could be represented, let‘s focus on the image processing side
and specifically with OpenCV and python.
Image Acquisition
OpenCV gives the flexibility to capture an image directly from a pre-recorded video stream,
a camera input feed, or a directory path.
import cv2

# Taking input from a directory path (the 0 flag loads the image in grayscale)
img = cv2.imread(r'C:\Users\USER\Desktop\image.jpg', 0)

# Capturing input from a video stream (0 selects the first attached camera)
cap = cv2.VideoCapture(0)
Depending on the use case, there are various methods which can be applied; some very
common ones are as follows:
a. Histogram Equalization:

import cv2

img = cv2.imread('wiki.jpg', 0)      # read the image in grayscale
# Applying histogram equalization to spread out the intensity values
equ = cv2.equalizeHist(img)
# Save the result
cv2.imwrite('res.png', equ)
Erosion and dilation belong to the group of morphological transformations and are widely used
together for the treatment of noise or the detection of intensity bumps.
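A brief sketch of these two morphological operations follows; the input file name and the 5x5 kernel size are assumptions for illustration only.

import cv2
import numpy as np

img = cv2.imread('binary.png', 0)                # assumed binary/grayscale input image
kernel = np.ones((5, 5), np.uint8)               # 5x5 structuring element

eroded = cv2.erode(img, kernel, iterations=1)    # shrinks white regions, removes small specks
dilated = cv2.dilate(img, kernel, iterations=1)  # grows white regions, fills small gaps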
c. Image Denoising
Noise has the very peculiar property that its mean is zero, and this is what helps in its removal
by averaging it out.
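One way to perform this averaging in OpenCV is sketched below; the function, the file name and the parameter values are illustrative choices and not necessarily those used to produce Fig 2.7.

import cv2

img = cv2.imread('noisy.jpg')                                          # assumed noisy colour image
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)   # averages similar patches
cv2.imwrite('denoised.jpg', denoised)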
Fig 2.7. Denoised image
The main concepts which we will be dealing with in our code are operations on images.
Below is an output for blending of images.
d. Grayscale conversion:
We will now be dealing with the grayscale conversion of an image, and we will also see the
negative of the image.
import cv2
from matplotlib import pyplot as plt

img1 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # BGR -> RGB for display
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # BGR -> grayscale
r, g, b = cv2.split(img1)                      # separate colour planes
# 'images' and 'titles' are lists holding the four images and their captions
for i in range(4):
    plt.subplot(2, 2, i + 1)
    plt.imshow(images[i], cmap='gray')
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.show()
Fig 2.9. Conversion of a color image to grayscale
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.imshow(output[i], cmap='gray')
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.show()
This is the concept with which we convert live video into a binary one. Here is the code for the
conversion.
import cv2

def main():
    windowName = "Live video"
    cap = cv2.VideoCapture(0)   # represents the 1st webcam
    if cap.isOpened():
        ret, img = cap.read()
    else:
        ret = False
    while ret:
        ret, frame = cap.read()
        th = 127            # threshold value
        max_val = 255       # value given to pixels above the threshold
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret2, binary = cv2.threshold(gray, th, max_val, cv2.THRESH_BINARY)
        cv2.imshow(windowName, binary)
        if cv2.waitKey(1) == 27:    # Esc key stops the loop
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
Output:
We must set a particular threshold value to turn the pixels of interest into the desired colour and
the remaining values into another colour. This technique is used even for tracking a
particular coloured object, for example a white-coloured vehicle, a black van, etc.
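A hedged sketch of this idea is shown below: a simple threshold isolates near-white pixels in a grayscale frame, which is the same principle behind tracking a light-coloured object. The file name and the threshold value of 200 are only examples.

import cv2

frame = cv2.imread('road.jpg', 0)                                 # assumed grayscale frame
ret, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)     # keep only near-white pixels
cv2.imshow('white objects', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()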
CHAPTER 3
PROPOSED METHODOLOGY
ALGORITHM:
Start
1. Read each and every frame from the camera, i.e. the live video.
2. Capture each frame and detect the eyes in every frame.
3. Convert the RGB image (actually read as BGR) into a grayscale image. In grayscale we have
different shades of black.
4. Using a threshold value, convert the grayscale image into a binary image.
5. Divide the image into three sections.
6. Count the number of black pixels in every section.
7. If the number of black pixels is highest in the right section, the logic must drive the motor to the
right; similarly for the left.
8. If the number of black pixels is highest in the middle section, verify the black pixels in the upper
and lower regions.
9. If the number of black pixels is highest in the top section, the logic must drive the motor
forward; similarly for the back.
10. If no black pixels are detected continuously for five frames, stop the motor.
Stop
CODE
import cv2
import numpy as np
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor1A = 16
Motor1B = 18
Motor1E = 22
Motor2A = 23
Motor2B = 21
Motor2E = 19
GPIO.setup(Motor1A,GPIO.OUT)
GPIO.setup(Motor1B,GPIO.OUT)
GPIO.setup(Motor1E,GPIO.OUT)
GPIO.setup(Motor2A,GPIO.OUT)
GPIO.setup(Motor2B,GPIO.OUT)
GPIO.setup(Motor2E,GPIO.OUT)
face_cascade=cv2.CascadeClassifier('/home/pi/opencv3.3.0/data/haarcascades/haarcascade_frontalface_default.xml')
eye_cascade=cv2.CascadeClassifier('/home/pi/opencv3.3.0/data/haarcascades/haarcascade_eye.xml')
while 1:
count=1
# (Frame capture from the camera, grayscale conversion and face detection with
# face_cascade.detectMultiScale precede the following loop in the full script.)
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
count=count+1
cv2.imwrite(s1,crop_img)
cv2.imshow('img',img)
#it releases the camera from operation (disengaged)
imgpath='/home/pi/New/final.jpg'
img=cv2.imread(imgpath,1)
imgpath2='/home/pi/New/final2.jpg'
img2=cv2.imread(imgpath2,1)
th=75
max_val=255
# o1 is assumed to be the thresholded (binary) eye image produced above;
# p, q and z count the black pixels in the left, middle and right thirds.
for i in range(rows):
for j in range(coloumns):
if j < coloumns/3:
if np.any(o1[i, j]==0):
p=p+1
elif j>coloumns/3 and j<(2*coloumns)/3 :
if np.any(o1[i, j]==0):
q=q+1
elif j>(2*coloumns)/3 and j<coloumns:
if np.any(o1[i, j]==0):
z=z+1
a=0   # black-pixel counts for the top, middle and bottom thirds
b=0
c=0
GPIO.output(Motor2A,GPIO.LOW)
GPIO.output(Motor2B,GPIO.HIGH)
GPIO.output(Motor2E,GPIO.HIGH)
if z<q and p<q:
for i in range(rows):
for j in range(coloumns):
if i < rows/3:
if np.any(o1[i, j]==0):
a=a+1
elif i>rows/3 and i<(2*rows)/3 :
if np.any(o1[i, j]==0):
b=b+1
elif i>(2*rows)/3 and i<rows:
if np.any(o1[i, j]==0):
c=c+1
GPIO.output(Motor2B,GPIO.HIGH)
GPIO.output(Motor2E,GPIO.HIGH)
if a<b and c<b:
GPIO.output(Motor1E,GPIO.LOW)
GPIO.output(Motor2E,GPIO.LOW) #stop
if a<c and b<c:
GPIO.output(Motor1A,GPIO.HIGH)
GPIO.output(Motor1B,GPIO.LOW)
GPIO.output(Motor1E,GPIO.HIGH)
GPIO.output(Motor2A,GPIO.HIGH) #back
GPIO.output(Motor2B,GPIO.LOW)
GPIO.output(Motor2E,GPIO.HIGH)
if q<p and z<p:
GPIO.output(Motor1A,GPIO.LOW)
GPIO.output(Motor1B,GPIO.HIGH)
GPIO.output(Motor1E,GPIO.HIGH)
GPIO.output(Motor2A,GPIO.HIGH) #right
GPIO.output(Motor2B,GPIO.LOW)
GPIO.output(Motor2E,GPIO.HIGH)
GPIO.cleanup()
if cv2.waitKey(1)== 27:
break
cv2.destroyAllWindows()
cap.release()
This code finds the region of the eye in which the number of black pixels is greatest. Based upon
the logic that drives the L293D motor-driver IC, the motors rotate, thereby moving the
wheelchair. The simulated results are shown in the next chapter.
CHAPTER 4
RESULTS
4.1 UNDERSTANDING THE CODE
The webcam captures video frames and had to be placed a small distance from one of the
eyes. The obtained coloured frames were converted into grayscale images. These
images were converted into binary by taking a suitable threshold value.
The resultant image was then divided into three parts: left, middle and right. The number of
black pixels was counted in all three regions, and the maximum of the three decided the logic to
drive the wheelchair. If all three regions had no black pixels, then a blink was assumed to
be detected.
To decide the break points that make the wheelchair automatic, the user needs full control
to start and stop the system. For this we developed a logic based on blink detection. A flag
variable was defined which decides the ON/OFF control of the motors. Once four consecutive
blinks were detected, the flag variable was inverted and the state of the motors was changed
from ON to OFF or vice versa.
THESE ARE A FEW OUTPUTS:
Output:
9
274
281
Final:281
Thus the logic drives the IC in the "right" direction.
Output:
85
864 -- the middle section is greater, so the upper and lower sections are checked too
619
345
350
873 -- the lower section is greater
Final: 873, thus the logic drives the IC in the "backward" direction.
Output:
119
25
0
Final:119
Hence the logic drives the IC in the "left" direction.
Output:
30
285
7
385
0
0
Final:385
Hence the logic drives the IC in the "upward" direction.
CHAPTER 5
CONCLUSIONS
&
FUTURE SCOPE
CONCLUSION
In this project, a specific technique to design an automated wheelchair for physically challenged
individuals through eyeball detection is presented. The result obtained is suitable for driving the
wheelchair smoothly for a physically impaired person of any age: by moving the eye in a specific
direction, the wheelchair is moved in the desired direction. The output obtained is also accurate, as
we capture every frame of the eyeball and then count the pixels in each frame. Besides this, the IR
LEDs can be turned OFF during the day time to reduce power consumption and the exposure of the
eye to IR rays, which on constant exposure can cause damage to the eye. The final system is fully
robust. The Raspberry Pi is one of the latest technologies available in the modern world, and what
makes the Raspberry Pi special is that it has its own operating system, thereby reducing the circuitry
on the person's body. The cost also reduces a lot compared to other systems. Therefore, this method
works as a boon for differently-abled persons, especially those with quadriplegia. We have been able
to make the system very accurate.
FUTURE SCOPE:
The same concept can even be used in automobiles, i.e. the vehicle can be stopped based upon
eye blinks. If the person is sleepy or drowsy, he or she blinks the eyes more often; our device then
detects the blinks and sends a signal to stop the car.
This can also be used in robotics, i.e. we can control a robot with our eyeball movement,
based on the application in which it is used.