
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“Jnana Sangama”, Belagavi-590018

A Project Report
On

“AI ASSISTED EYE FOR THE BLIND”

SUBMITTED IN PARTIAL FULFILLMENT FOR THE AWARD OF THE DEGREE OF

BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
SUBMITTED BY

NISHANTH C N (1JB21CS099)
NISCHITH H U (1JB21CS098)
N P SOORAJ (1JB21CS092)

Under the Guidance of


Dr. Prakruthi M.K
Associate Professor,
Dept. of CSE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


SJB INSTITUTE OF TECHNOLOGY
No. 67, BGS Health & Education City, Dr. Vishnuvardhan Rd, Kengeri, Bengaluru, Karnataka 560060

2024 - 2025
|| Jai Sri Gurudev ||
Sri Adichunchanagiri Shikshana Trust ®
SJB INSTITUTE OF TECHNOLOGY
No. 67, BGS Health & Education City, Dr. Vishnuvardhan Rd, Kengeri, Bengaluru, Karnataka 560060

Department of Computer Science and Engineering

CERTIFICATE

Certified that the Project Work entitled “AI ASSISTED EYE FOR THE BLIND”,
carried out by NISCHITH H.U, NISHANTH C.N, and N.P SOORAJ, bearing USNs 1JB21CS098,
1JB21CS099, and 1JB21CS092, who are bonafide students of SJB Institute of Technology, in
partial fulfilment of the 7th semester of BACHELOR OF ENGINEERING in COMPUTER
SCIENCE AND ENGINEERING of the Visvesvaraya Technological University, Belagavi,
during the academic year 2024-25. It is certified that all corrections/suggestions indicated
for Internal Assessment have been incorporated in the report deposited in the
departmental library. The project report has been approved as it satisfies the academic
requirements in respect of Project Work prescribed for the said Degree.

Signature of Guide Signature of HOD Signature of Principal


Dr. Prakruthi M K Dr. Krishna A N Dr. K. V. Mahendra Prashanth
Associate Professor Professor and Head Principal, SJBIT
Dept. of CSE, SJBIT Dept. of CSE, SJBIT

Name of the Examiners Signatures

1. __________________________ ______________________________

2.___________________________ ______________________________
ACKNOWLEDGEMENT

We would like to express our profound gratitude to His Divine Soul Jagadguru Padmabhushan Sri Sri Sri Dr.
Balagangadharanatha Mahaswamiji and His Holiness Jagadguru Sri Sri Sri Dr. Nirmalanandanatha
Mahaswamiji for providing us an opportunity to complete our academics in this esteemed institution.

We would also like to express our profound thanks to Revered Sri Sri Dr. Prakashnath Swamiji, BGS &
SJB Group of Institutions, for his continuous support in providing amenities to carry out this Project Work in
this admired institution.

We express our gratitude to Dr. Puttaraju, Academic Director, BGS & SJB Group of Institutions, for providing
us excellent facilities and an academic ambience, which have helped us in the satisfactory completion of our
Project work.

We express our gratitude to Dr. K. V. Mahendra Prashanth, Principal, SJB Institute of Technology, for
providing us excellent facilities and an academic ambience, which have helped us in the satisfactory
completion of our Project work.

We extend our sincere thanks to all the Deans of SJB Institute of Technology for providing us invaluable
support throughout the period of our Project work.

We extend our sincere thanks to Dr. Krishna A N, Head of the Department, Computer Science and
Engineering for providing us an invaluable support throughout the period of our Project work.

We wish to express our heartfelt gratitude to our Project Coordinator Dr. Roopa M J, Associate
Professor, Department of CSE, and our guide Dr. Prakruthi M K, Associate Professor, Department of CSE, for
their valuable guidance, suggestions, and cheerful encouragement during the entire period of our Project work.

Finally, we take this opportunity to extend our earnest gratitude and respect to our parents, the teaching and
non-teaching staff of the department, the library staff, and all our friends, who have directly or indirectly
supported us during the period of our Project work.

Regards,

NISCHITH H.U [1JB21CS098]


NISHANTH C.N [1JB21CS099]
N.P SOORAJ [1JB21CS092]
ABSTRACT

The AI-assisted eye for the blind project aims to revolutionize the lives of blind and visually
impaired individuals by enhancing their autonomy, mobility, and independence through advanced
technology. By utilizing AI-powered systems, the project converts real-time visual data into auditory
or tactile feedback, assisting users in navigating their environment, recognizing objects, reading
text, and engaging in social interactions. Leveraging technologies like computer vision, machine
learning, and natural language processing, the system provides essential support, improving the
user’s ability to move confidently, recognize everyday objects, and perform tasks such as reading
signs, labels, and printed materials, all through voice feedback.

Incorporating wearable devices such as smart glasses or smartphones, the project aims to offer a
user-friendly, intuitive solution that can be easily adapted to diverse environments and individual
needs. The technology is designed to be affordable and accessible, ensuring it reaches a broad
demographic, including underserved communities. By emphasizing accessibility and social inclusion,
this AI-assisted system fosters greater independence for blind individuals, allowing them to navigate
the world with greater ease and participate more fully in social and daily life activities, ultimately
promoting equality and opportunities for all.

TABLE OF CONTENTS
Acknowledgement i
Abstract ii
Table of Contents iii
List of Figures iv

Chapter 1
Introduction
1.1 Introduction to Internet of Things .........................................................................1
1.2 Functioning of IoT ............................................................................................... 2
1.3 Future Vision of IoT .............................................................................................. 3
1.4 IoT: Services and Applications ............................................................ 4
1.5 Problem statement ................................................................................................. 5
1.6 Goals and Objectives of the project ...................................................................... 5

Chapter 2
Literature survey
2.1 Design of Smart e-Tongue for the Physically Challenged People ........................ 6
2.2 MyVox-Device for the communication between people: blind, deaf,
deaf-blind and unimpaired .................................................................... 7
2.3 PiCam:IoT based Wireless Alert System for Deaf and Hard of Hearing .............. 7
2.4 Design of a communication aid for physically challenged ....................................8
2.5 Implementation Of Gesture Based Voice And Language Translator For
Dumb People.......................................................................................................... 9
2.6 Intelligent Gesture Recognition System for Deaf and Dumb Individuals .......... 10
2.7 Voice-Assisted Smart Glasses for the Blind......................................................... 11
2.8 Sign Language Translation via Neural Networks ............................................... 11
2.9 IoT-Enabled Alert System for Hearing Impaired Individuals ..............................12
2.10 Low-Cost Braille e-Reader for the Visually Impaired ...................................... 12

Chapter 3
Objectives and Methodology
3.1 Objectives of the proposed system ................................................................................ 13
3.1.1 Advantages of the proposed system ............................................................................ 13
3.1.2 Drawbacks of the proposed system ..............................................................................13
3.2 Methodology of the proposed system ............................................................................ 14

Chapter 4
System requirements
4.1 What is system analysis ...................................................................................... 16
4.2 Software Requirement Specification .................................................................... 16
4.3 Hardware Components ......................................................................................... 17
4.4 Software Components ......................................................................................... 17
4.5 Hardware Component Description ....................................................................... 18
4.6 Software Component Description ........................................................................24
4.6.1 Raspbian OS...................................................................................................... 24
4.6.2 Tesseract OCR................................................................................................... 24
4.6.3 Open CV ........................................................................................... 25
4.6.4 Espeak .............................................................................................. 26
4.6.5 Xming ............................................................................................... 27
4.6.6 Putty ................................................................................................................ 28

Chapter 5
System architecture
5.1 High Level Design .............................................................................................. 29
5.1.1 System Architecture ......................................................................................... 29
5.1.2 Data Flow Diagram ...........................................................................................30
5.1.3 Sequence Diagram ........................................................................................... 31
5.1.4 Use Case Diagram ............................................................................................. 33

Chapter 6
Implementation
6.1 Overview of System Implementation ................................................................... 35
6.2 Module Description.............................................................................................. 36
6.2.1 Text-to-speech (TTS) ....................................................................................... 36
6.2.2 Image-to-speech using camera (ITSC) .............................................................37
6.2.3 Gesture-to-speech (GTS) ................................................................................. 37
6.2.4 Speech-to-Text (STT) .......................................................................................38
6.3 Pseudo code .........................................................................................................38
6.3.1 Image-to-speech using camera (ITSC) .............................................................39

Chapter 7
Testing
7.1 Unit testing...........................................................................................................40
7.2 Integration testing ................................................................................................42
7.3 Functional testing ................................................................................ 43
7.4 Acceptance testing .............................................................................. 45
7.5 System testing ..................................................................................... 45

Chapter 8
Results
8.1 Final connection..................................................................................................47
8.2 User of the project .............................................................................................. 47

Conclusion ................................................................................................................................ 49
Future Enhancement ................................................................................................................49
References ..................................................................................................................................50

LIST OF FIGURES

SL NO DESCRIPTION PAGE NO

Fig 4.1 Specification of Raspberry pi 18

Fig 4.2 SD card 19


Fig 4.3 GPIO connector on Raspberry Pi 20
Fig 4.4 Status LEDs 21
Fig 4.5 Logitech Camera 23
Fig 4.6 Raspbian OS 24

Fig 4.7 Tesseract OCR 25

Fig 4.8 Open CV 26


Fig 4.9 Espeak 26
Fig 4.10 Xming 27
Fig 4.11 Putty 28
Fig 5.1 Architecture of the System 30

Fig 5.2 Data Flow Diagram of the System 31


Fig 5.3 Sequence Diagram 32
Fig 5.4 Use case Diagram of Text-To-Speech 33
Fig 5.5 Use case Diagram of Image/ Gesture-to-Speech 34
Fig 5.6 Use case Diagram of Speech-To-Text 34

Fig 6.1 Text-to-Speech 36


Fig 6.2 Image-to-Speech 37
Fig 6.3 Speech-to-text 38
Fig 7.1 Integration of components with Raspberry pi 43
Fig 7.2 Testing the connection with Raspberry pi 43


Fig 7.3 Functional Testing of Xming 44

Fig 7.4 Functional Testing of Putty 45

Fig 8.1 Final connection 47

Fig 8.2 User of the project 48

Chapter 1

INTRODUCTION
1.1 Introduction to Internet of Things
The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing
devices within the existing Internet infrastructure. Typically, IoT is expected to offer advanced
connectivity of devices, systems, and services that goes beyond machine-to-machine
communications (M2M) and covers a variety of protocols, domains, and applications. The
interconnection of these embedded devices (including smart objects) is expected to usher in
automation in nearly all fields, while also enabling advanced applications like a Smart Grid.
Things, in the IoT, can refer to a wide variety of devices such as heart monitoring implants, biochip
transponders on farm animals, electric clams in coastal waters, automobiles with built-in sensors,
or field operation devices that assist fire-fighters in search and rescue. Current market examples
include thermostat systems and washer/dryers that use Wi-Fi for remote monitoring.

According to Gartner, Inc. (a technology research and advisory corporation), there will be
nearly 26 billion devices on the Internet of Things by 2020. ABI Research estimates that more than
30 billion devices will be wirelessly connected to the Internet of Things (Internet of Everything) by
2020. As per a recent survey and study done by the Pew Research Internet Project, a large majority of
the technology experts and engaged Internet users who responded (83 percent) agreed with the notion
that the Internet/Cloud of Things, embedded and wearable computing (and the corresponding
dynamic systems) will have widespread and beneficial effects by 2025. It is, as such, clear that the
IoT will consist of a very large number of devices being connected to the Internet. Integration with
the Internet implies that devices will utilize an IP address as a unique identifier. However, due to
the limited address space of IPv4, objects in the IoT will have to use IPv6 to accommodate the
extremely large address space required. Objects in the IoT will not only be devices with sensory
capabilities, but also provide actuation capabilities (e.g., bulbs or locks controlled over the
Internet).

To a large extent, the future of the Internet of Things will not be possible without the
support of IPv6 and consequently the global adoption of IPv6 in the coming years will be critical
for the successful development of the IoT in the future. The embedded computing nature of
many IoT devices means that low-cost computing platforms are likely to be used. In fact, to
minimize the impact of such devices on the environment and energy consumption, low-power radios
are likely to be used for connection to the Internet. Such low-power radios do not use Wi-Fi or well-
established cellular network technologies, and remain an actively developing research area.

However, the IoT will not be composed only of embedded devices, since higher order
computing devices will be needed to perform heavier duty tasks (routing, switching, data processing,
etc.). Companies such as Free Wave Technologies have developed and manufactured low power
wireless data radios (both embedded and standalone) for over 20years to enable Machine-to-Machine
applications for the industrial internet of things. Besides the plethora of new application areas for
Internet connected automation to expand into, IoT is also expected to generate large amounts of data
from diverse locations that is aggregated and very high-velocity, thereby increasing the need to better
index, store, and process such data. Diverse applications call for different deployment scenarios and
requirements, which have usually been handled by proprietary implementations. However, since the IoT
is connected to the Internet, most of the devices comprising IoT services will need to operate utilizing
standardized technologies. Prominent standardization bodies, such as the IETF, IPSO Alliance and
ETSI, are working on developing protocols, systems, architectures and frameworks to enable the IoT.

1.2 Functioning of IoT


The Internet of Things is the expansion of current Internet services so as to accommodate each
and every object which exists in this world or is likely to exist in the coming future. This section
discusses the perspectives, challenges, and opportunities behind a future Internet that fully supports
the “things”, as well as how the things can help in the design of a more synergistic future Internet.
Things have identities and virtual personalities, operating in smart spaces and using intelligent
interfaces to connect and communicate within social, environmental, and user contexts. There is
some fuzziness about the concept of the Internet of Things; for instance, IoT can be broken into two
parts: Internet and Things.

The Internet is the worldwide network of interconnected computer networks based on a standard
communication protocol, the Internet suite (TCP/IP), while a thing is an object not precisely
identifiable. The world around us is full of objects and smart objects, and the existing service
provider is known as the Internet.

The convergence of sensors, smart objects, RFID-based sensor
networks, and the Internet gives rise to the Internet of Things. With the increased usage of sensors,
raw data as well as distributed data is increasing. Smart devices are now connected to the Internet
using their communication protocols and continuously collect and process data.

Ubiquitous computing, which was once thought a difficult task, has now become a reality due
to advances in the fields of automatic identification, wireless communication, distributed
computation, and fast Internet access. From just a data perspective, the amount of data
generated, stored, and processed will be enormous. We focused on making this architecture a
sensor-based architecture where each sensor node is as important as the sensor network itself.
Visualizing each sensor as having intelligence is the ultimate aim of any architecture in the IoT
domain. There is a pervasive presence in the human environment of things or objects such as radio-
frequency identification (RFID) tags, sensors, actuators, mobile phones, and smart embedded devices
which, through unique addressing schemes, are able to effectively communicate and interact
with each other and work together to reach the common goal of making the system easier to operate
and utilize. The objects that will be connected will be adaptive, intelligent, and responsive.

1.3 Future Vision of IoT


The Internet of Things is a vision which is under development, and there can be many stakeholders
in this development depending upon their interests and usage. It is still in its nascent stages, where
everybody is trying to interpret IoT with respect to their own needs. Sensor-based data collection,
data management, data mining, and the World Wide Web are involved in the present vision, and of
course sensor-based hardware is also involved. A simple and broad definition of the Internet of
Things, and the basic idea of this concept, is the pervasive presence around us of a variety of things
or objects such as Radio Frequency Identification (RFID) tags, sensors, actuators, mobile phones,
etc. which, through unique addressing schemes, are able to interact with each other and cooperate
with their neighbours to reach common goals.
There are two particular visions.
 Things Oriented Vision
 Internet Oriented Vision


Things Oriented Vision: This vision is supported by the fact that we can track anything using
sensors and pervasive technologies such as RFID. The basic philosophy is uniquely identifying any
object using the specifications of the Electronic Product Code (EPC). This technique is extended
using sensors. It is important to appreciate the fact that this future vision will depend upon sensors
and their capability to fulfill the “things” oriented vision. We will be able to generate the data
collectively with the help of sensors and sensor-type embedded systems. The summarized vision will
be dependent upon sensor-based networks as well as RFID-based sensor networks, which will take
care of integrating RFID technology with sophisticated sensing and computing devices and global
connectivity.

Internet Oriented Vision: The internet-oriented vision has pressed upon the need to make smart
objects which are connected. The objects need to have the characteristics of IP protocols, as this is
one of the major protocol suites in the world of the Internet. The sensor-based object can be
converted into an understandable format, identified uniquely, and its attributes can be
continuously monitored. This makes the base for smart embedded objects, which can be assumed
to be microcomputers having computing resources.

1.4 IoT: Services and Applications


Let us look into the set of future possibilities where we can have rewarding applications.
Some of the attributes which can be considered while developing applications are network
availability, bandwidth, area of coverage, redundancy, user involvement, and impact analysis;
these mainly concern the properties of RFID, sensors, and 6LoWPAN communication networks on
which IoT services are based. The basis of tracking is indeed RFID tags, which are placed on
objects, human beings, animals, logistics, etc. An RFID tag reader may be used at all the
intermediate stages for tracking anything which has an RFID tag on it. This object position
identification can be smartly used to trigger an alarm, an event, or a specific inference regarding a
specific subject. As for smart environments and enterprise collection, in any work environment an
enterprise-based application can be built on a smarter environment. Here the individual or the
enterprise may give data to the outside world at its own discretion. Smart embedded sensor
technology can be used to monitor and transmit critical parameters of the environment. Common
attributes of the environment are temperature, humidity, pressure, etc. Smart monitoring of soil
parameters can allow informed decision making about agriculture, increase the production of food
grains, and prevent the loss of crops.


1.5 Problem Statement


One of the most precious gifts to a human being is the ability to see, listen, speak, and respond
according to the situation. But there are some unfortunate ones who are deprived of this.
Communication between deaf-dumb and normal people has always been a challenging task. The
proposed device is an innovative communication system framed within a single compact device.
We provide a technique for a blind person to read text, achieved by capturing an image through a
camera and converting the text to speech (TTS). It provides a way for deaf people to read text
through speech-to-text (STT) conversion technology. Also, it provides a technique for dumb people
using text-to-voice conversion, and the gestures made by them can be converted to text. Tesseract
OCR (Optical Character Recognition) is used to read the words for the blind, the dumb can
communicate their message through text and gestures which will be read out by eSpeak, and the
deaf can understand others' speech through the displayed text.

1.6 Goals And Objective Of The Project


The main goal of our project is to provide a standard lifestyle for deaf, dumb, and blind people, like
normal ones. Through this device the visually impaired are able to understand written words easily.
The vocally impaired can communicate their message through text and gestures. The deaf are able
to understand others' speech from the text that is displayed. This helps them to experience an
independent life.



Chapter 2
LITERATURE SURVEY
2.1. Design of Smart e-Tongue for the Physically Challenged People. [1]
Year: 2013
Author: M Delliraj
Problem Addressed:
For deaf and dumb people the method of communication is sign language, which comprises
hand gestures and facial expressions for each alphabet, number, and word. Learning sign language
is like learning any other language. The signs used for different gestures may appear similar to
each other and can be differentiated by looking at the angle, the space between fingers, and the
number of fingers opened, closed, or semi-closed. Every deaf and dumb person is aware of these
signs, but normal people do not bother to learn their language. The major challenge these people
face is communication with the external world.

Methodology:
To overcome this communication barrier, we propose a smart module which will convert the
signs in hand gestures and facial expressions into text readable by normal people. Two different
approaches are used: one using sensors and the other using image processing. We use image
processing techniques in our work. A complete solution for conversion of ISL to text will be
implemented, and an algorithm will be designed and implemented to convert ISL to text in real
time.

Drawbacks:
 Insufficient datasets: this typically refers to classification tasks where the classes are not
represented equally.
 It is imperative to choose the evaluation metric of your model correctly; if this is not done,
you might end up adjusting or optimizing a useless parameter.
 The number of fraudulent transactions compared to the number of non-fraudulent
transactions will be very small, and this is where the problem arises.

2.2. MyVox—Device for the communication between people: blind, deaf, deaf-blind
and unimpaired [2]

Year: 2014
Author: Fernando Ramirez-Garibay, Cesar Millan Ollivarria
Problem Addressed:
A deaf-blind person is one with impaired senses of hearing and sight. Those who have lost only
one of these senses can use the rest to compensate up to a certain point, and can develop techniques
to successfully communicate with others. Blind people can hear and speak; deaf people can learn
how to write, understand sign language, and read lips. Deaf-blind people, however, left mainly with
touch and smell to connect with the world around them, do not have direct access to the main
communication channels that others use in their everyday lives.

Methodology:
The authors presented a device for the communication of deaf-blind people. While their lack of
hearing and sight could represent a limitation when communicating with others, technology is
presented that can be of use for communicating with others who do not speak sign language. The
communication device, named MyVox, has proven to be a useful tool for a patient with Usher
syndrome, who is now able to communicate with others without the need for an interpreter. Based
on his feedback, the authors are developing an upgraded system that will also be tried by a larger
population of deaf-blind users.

Drawbacks:
 No internet accessibility or portability: individuals with disabilities such as visual, auditory,
or speech impairments have difficulty communicating, as others have limited or no knowledge
of Braille and sign language.
 It is not a low-cost portable device acting as a mediator to bridge this communication gap
and reduce the dependence on expert human translators.
 There is no LAN connection for internet access when using eSpeak.

2.3 PiCam:IoT based Wireless Alert System for Deaf and Hard of Hearing[3]
Year: 2015
Author: P. Kumari, S.R.N Reddy
Problem Addressed:
Hearing loss presents many everyday challenges. Communication may be the biggest challenge
of all: getting and giving information, exchanging ideas, and sharing feelings, whether in one-to-one
contact or in groups. Sometimes there are small disruptions of daily life that result from reduced
hearing.

Methodology:
The techniques involved include micro-array camera imaging, super-resolution with a neural
network, and fusion of the V-system. The objective of this paper is to design and implement a
low-cost stand-alone device that notifies deaf people who live alone in their house when the
doorbell rings. The system is based on a Raspberry Pi which includes a camera, vibrator, wireless
GSM, and Bluetooth. When a visitor presses the doorbell, the captured image is transferred to the
wearable device, which helps the user know whether the right person or an intruder is at the door.
After transferring the image, the wearable device vibrates to notify the user. Also, a message is sent
to the owner through GSM. The visitor's image, along with the date and time, is sent to the server
for retrieving the information later. The system is reliable, effective, easy to use, and also enhances
the security of the user.

Drawbacks :
 The image received on the wearable device through Bluetooth may contain disturbance; the
first step towards image processing is to acquire the image.
 The acquired image stored in the system needs to be connected to the software
automatically, which is done by creating an object.
 Image acquisition devices typically do not support multiple video formats; even simple
images contain disturbance, and the lack of a clear background also leads to disturbance in
image capture.

2.4. Design of a communication aid for physically challenged [4]


Year: 2015
Author: R Suganya, T. Meeradevi
Problem Addressed:
Communication is a process of conveying information, feelings, thoughts, and ideas to others in
verbal or non-verbal form. Disabled people like the deaf, dumb, and blind are far from
communication because of a lack of amenities. Deaf and dumb people communicate by using sign
language, but the blind cannot. To overcome this, a communication aid is developed here for
disabled people which helps them communicate easily among themselves and, at the same time,
with others. The communication for the disabled people here is based on the regional sign
language.

Methodology:
The communication is held by means of a glove-based deaf-mute communication interpreter
method. The glove is fitted inside with flex sensors, tactile sensors, and an accelerometer. The
gestures are converted into a text display. The work covers the design of a communication aid for
the disabled, and the prototyping and testing of a portable keyboard and speaker device with a
refreshable Braille display for communication between two people. The communication between
deaf and dumb people is carried out using a gesture-based device which changes sign language
into a text display, giving a voice to voiceless people with the help of smart gloves.

Drawbacks:
 Communication gap and low sign recognition; defining accurate hand-shape information is
one of the most crucial tasks in gesture-controlled computer applications.
 Specifically in SLR systems, the number of gestures is large, and that makes the hand-shape
definition more difficult.
 If 2D images are captured by a single camera, extraction of the hand shapes from the video
becomes more complex, which also leads to the communication gap.

2.5 Implementation Of Gesture Based Voice And Language Translator For
Dumb People [5]
Year: 2016
Author: L. Anusha, Y. Usha Devi
Problem Addressed: Dumb persons communicate through gestures, which are not understood by
the majority of people. A gesture is a movement of a part of the body, especially a hand or the head,
to express an idea or meaning.

Methodology:
Using a dual-channel ADC, this paper proposes a system that converts gestures given by the user
in the form of English alphabets into corresponding voice, and translates this English voice output
into any other Microsoft-supported language. The system consists of an MPU6050 for sensing
gesture movement, a Raspberry Pi for processing, a three-button keypad, and a speaker. It is
implemented using a trajectory recognition algorithm for recognizing alphabets. The Raspberry Pi
generates voice output for the text in multiple languages using Voice RSS and Microsoft Translator.
When tested,
the system recognized A-Z alphabets and generated voice output based on the gestures in multiple
languages.

Drawbacks
* High cost of flex sensors.
* Understanding and required prior knowledge.
* High initial investment.
* Does not applicable in night time.

2.6 Intelligent Gesture Recognition System for Deaf and Dumb Individuals [6]
Year: 2018
Author: A. Singh, P. Patel
Problem Addressed:
Deaf and dumb individuals often face difficulties in communicating with those who do not
understand sign language. The lack of an interpreter creates a significant barrier to interaction.
Methodology:
This study proposed a wearable gesture recognition system using flex sensors embedded in
gloves. The sensors captured hand movements and transmitted data to a microcontroller,
which translated the gestures into text and voice using a trained machine learning model. The
system was evaluated for multiple gestures, achieving a recognition accuracy of 92% for 26
ASL alphabets.
Drawbacks:
 Limited to predefined gestures and alphabets.
 Struggles with complex phrases requiring multi-gesture inputs.


2.7 Voice-Assisted Smart Glasses for the Blind [7]


Year: 2019
Author: J. Brown, L. Chen
Problem Addressed:
Blind individuals often rely on tactile or auditory cues to interact with their environment. This
dependency limits their access to visual information.
Methodology:
The system used smart glasses equipped with a camera and a built-in processor to capture
images and perform object recognition in real-time. Text-to-speech (TTS) was implemented to
convey detected objects and text to the user via a bone-conduction headset. The glasses were
lightweight and portable, making them suitable for everyday use.
Drawbacks:
 Limited field of view for the camera.
 High computational requirements constrained battery life.

2.8 Sign Language Translation via Neural Networks [8]


Year: 2020
Author: S. Kumar, V. Gupta
Problem Addressed:
Existing sign language translation systems fail to interpret complex, dynamic gestures accurately
due to reliance on rule-based algorithms.
Methodology:
A deep convolutional neural network (CNN) was used to analyze video inputs of sign language
gestures. The model was trained on a large dataset of continuous sign language sequences. A
Natural Language Processing (NLP) module translated recognized gestures into grammatically
correct sentences for text and voice output.
Drawbacks:
 Requires high-resolution video input for optimal performance.
 Computationally intensive, limiting real-time deployment on edge devices.


2.9 IoT-Enabled Alert System for Hearing Impaired Individuals [9]


Year: 2021
Author: M. Lopez, A. Torres
Problem Addressed:
Hearing-impaired individuals miss critical auditory cues, such as fire alarms or doorbells,
which could pose safety risks.
Methodology:
The proposed system consisted of IoT-enabled sensors placed throughout a home
environment. These sensors detected specific sounds and sent alerts to a wearable device via
Bluetooth. The wearable device vibrated and displayed a visual notification, enabling the user
to respond to the event promptly.
Drawbacks:
 Limited to predefined sound profiles.
 Requires continuous internet connectivity for advanced features.

2.10 Low-Cost Braille e-Reader for the Visually Impaired [10]


Year: 2022
Author: K. Nakamura, H. Singh
Problem Addressed:
Braille books are bulky and expensive, limiting access to reading material for visually
impaired individuals.
Methodology:
The e-reader used a refreshable Braille display controlled by a Raspberry Pi microcontroller.
Text files were converted into Braille patterns using an open-source library. The system was
designed to be portable and affordable, making it accessible for widespread use.
Drawbacks:
 Limited to text-based files; does not support image or graphical content.
 Small display size restricts the amount of text shown at a time.



Chapter 3
OBJECTIVES AND METHODOLOGY
3.1 Objectives of the proposed system
 The visually impaired people are able to understand written words easily through the Tesseract
software.
 The vocally impaired people can communicate their message through text, which can be read
out by eSpeak.
 The deaf people are able to understand others' speech from the displayed text.

The main goal of our project is to provide a standard lifestyle for deaf, dumb, and blind
people, like normal ones. Through this device the visually impaired are able to understand written
words easily. The vocally impaired can communicate their message through text and gestures.
The deaf are able to understand others' speech from the text that is displayed. This helps them to
experience an independent life.
3.1.1 Advantages of the proposed system
 All-in-one device, where the deaf, dumb, and blind can overcome their disabilities and
express their views to others.
 Voice-to-text conversion for deaf people.
 Image-to-voice conversion for blind people.
 Sign-to-text and text-to-voice conversion for dumb people communicating with a normal
person.
 It is a portable device with a very low cost.

3.1.2 Drawbacks of the proposed system


 The system is limited to gestures.
 Only static images are taken for text conversion.
 Limited for certain languages.

3.2 Methodology of the proposed system

i. RGB Image to Grayscale:


An RGB image can be viewed as three images (a red scale image, a green scale image, and a blue
scale image) stacked on top of each other. In MATLAB, an RGB image is basically an M*N*3
array of colour pixels, where each colour pixel is a triplet which corresponds to the red, green, and
blue colour components of the RGB image at a specified spatial location. Similarly, a grayscale
image can be viewed as a single-layered image. In MATLAB, a grayscale image is basically an M*N
array whose values have been scaled to represent intensities.

In MATLAB, there is a function called rgb2gray() available to convert an RGB image to a
grayscale image. Here we will convert an RGB image to a grayscale image without using the
rgb2gray() function. Our key idea is to convert an RGB image pixel, which is a triplet value
corresponding to the red, green, and blue colour components of the image at a specified spatial
location, to a single value by calculating a weighted sum of all three colour components.

Algorithm for conversion:


 Read the RGB colour image into the MATLAB environment.
 Extract the red, green, and blue colour components from the RGB image into three different
2-D matrices.
 Create a new matrix with the same number of rows and columns as the RGB image, containing
all zeros.
 Convert each RGB pixel value at location (i, j) to a grayscale value by forming a weighted sum
of the red, green, and blue colour components, and assign it to the corresponding location (i, j)
in the new matrix (see the sketch below).

ii. Espeak synthesizer:


eSpeak is a compact multi-platform, multi-language, open source speech synthesizer using a
formant synthesis method. In formant synthesis, voiced speech (vowels and sonorant consonants) is
created by using formants. Unvoiced consonants are created by using pre-recorded sounds. Voiced
consonants are created as a mixture of a formant-based voiced sound in combination with a pre-
recorded unvoiced sound. The eSpeak Editor allows generating formant files for individual vowels
and voiced consonants, based on a sequence of key frames which define how the formant peaks
(peaks in the frequency spectrum) vary during the sound. A sequence of formant frames can be
created with a modified version of Praat, a free scientific software package for the analysis of
speech in phonetics. The Praat formant frames, saved in a spectrum.dat file, can be converted to
formant key frames with eSpeakEdit.


Figure 3.1 Block diagram for Raspberry pi

iii. Tesseract OCR:


It is an optical character recognition engine for various operating systems. Tesseract up to and
including version 2 could only accept TIFF images of simple one-column text as inputs. These
early versions did not include layout analysis, and so inputting multi-columned text, images, or
equations produced garbled output. Since version 3.00 Tesseract has supported output text
formatting, hOCR positional information and page-layout analysis. Support for a number of new
image formats was added using the Leptonica library. Tesseract can detect whether text is
monospaced or proportionally spaced.

This algorithm is able to accurately decipher and extract text from a variety of sources.
As per its namesake, it uses an updated version of the Tesseract open source OCR tool. Images are
also automatically binarized and preprocessed so that Tesseract has an easier time deciphering
them. Not only can it extract English text; Tesseract supports over 100 other languages as well.
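As a hedged sketch of this OCR step in Python, using the pytesseract wrapper and an OpenCV Otsu binarization (the file name and settings are illustrative, not prescribed by this report):

```python
# Binarize a captured image with Otsu's method, then hand it to
# Tesseract via pytesseract. The file name is illustrative only.
import cv2
import pytesseract

image = cv2.imread("capture.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold automatically, which helps Tesseract
# separate text from an uneven background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(binary, lang="eng"))
```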

iv. Speechtexter:
Speechtexter is an online multi-language speech recognizer that can help you type long documents,
books, reports, and blog posts with your voice. If you need help, visit the help page at
https://www.speechtexter.com/help. This app supports over 60 different languages.
For better results, use a high-quality microphone, remove any background noise, and speak
loudly and clearly. It can create text notes, SMS messages, emails, and tweets from the user's voice.
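Speechtexter itself runs in the browser. Purely as an illustration of the same speech-to-text step scripted in Python, the SpeechRecognition package could be substituted (an assumed alternative, not the tool this report uses):

```python
# Illustration only: the report uses the Speechtexter web app; this
# sketch shows the equivalent step with the SpeechRecognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # compensate for background noise
    audio = recognizer.listen(source)            # record one utterance

try:
    print(recognizer.recognize_google(audio))    # online recognition
except sr.UnknownValueError:
    print("Speech was not understood")
```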



Chapter 4

SYSTEM REQUIREMENTS

4.1 What is system analysis?


Analysis is the process of breaking a complex topic or substance into smaller parts to gain a
better understanding of it. Analysts in the field of engineering look at requirements, structures,
mechanisms, and systems dimensions. Analysis is an exploratory activity. The Analysis Phase is
where the project lifecycle begins. The Analysis Phase is where you break down the deliverables
in the high-level Project Charter into the more detailed business requirements. The Analysis
Phase is also the part of the project where you identify the overall direction that the project will
take through the creation of the project strategy documents.

Gathering requirements is the main attraction of the Analysis Phase. The process of gathering
requirements is usually more than simply asking the users what they need and writing their answers
down. Depending on the complexity of the application, the process for gathering requirements
has a clearly defined process of its own. This process consists of a group of repeatable processes
that utilize certain techniques to capture, document, communicate, and manage requirements.

4.2 Software Requirement Specification


A Software Requirements Specification (SRS) – a requirements specification for a software
system – is a complete description of the behavior of a system to be developed. In addition to a
description of the software functions, the SRS also contains non-functional requirements. Software
requirements are a sub-field of software engineering that deals with the elicitation, analysis,
specification, and validation of requirements for software.
4.2.1 Functional Requirement

 Device should do minimal computations on its own: The device should be able to do its
computations without too much logic, and it should not need to depend on other external
factors.
 Device should give vocal feedback to the user via headphones: This helps
communication, as it is more akin to real-life communication. If the headset lacks this,
your own speech is deafened by the fact that you are wearing a headset.
 Device should be able to capture an image and display it on the screen while the
voice is generated: The image will be captured by the camera, converted to text using
Tesseract OCR, and displayed on the screen. Voice output is also given through the
headphones using eSpeak (a sketch of this flow follows the list).
 Device should be able to convert the received speech to text: The input is given as
speech through the camera's microphone and converted to text using Speechtexter, which is
available online.
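A minimal sketch of the capture-to-voice requirement above, assuming a USB webcam at index 0, the pytesseract wrapper, and the espeak command-line tool installed on Raspbian:

```python
# Sketch of the capture -> OCR -> voice flow; error handling is minimal
# and the file name is illustrative only.
import subprocess

import cv2
import pytesseract

camera = cv2.VideoCapture(0)           # Logitech USB camera
ok, frame = camera.read()              # grab a single frame
camera.release()

if ok:
    cv2.imwrite("capture.jpg", frame)  # refresh the stored image on the SD card
    text = pytesseract.image_to_string(frame)
    if text.strip():
        # Read the recognized text aloud through the headphones.
        subprocess.run(["espeak", text])
```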

4.2.2 Non Functional Requirement


 The camera is enabled during the image-to-speech conversion: During image capture
the camera will be on automatically. Voice input cannot be taken during this time, and
there is no need for a keyboard.
 The images will be refreshed as a new image is captured: The captured image will be
automatically stored in a file on the SD card. On each image capture the old images will be
refreshed and new images will be taken for processing.

4.3 Hardware Components


 ARM11 Raspberry Pi 3 board
 Logitech Camera
 Headphones
 SD card
4.4 Software Components
 Raspbian OS
 Tesseract OCR
 Open CV
 Espeak
 Xming
 Putty

Dept. Of CSE, SJBIT 2024 - 25 Page 17


AI Assisted Eye For The Blind

4.5 Hardware Component Description


The most common set of requirements defined by any operating system or software application is
the physical computer resources, also known as hardware. A hardware requirements list is often
accompanied by a hardware compatibility list (HCL), especially in the case of operating systems.
An HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular
operating system or application. The requirements of the system are:

4.5.1 ARM11 Raspberry Pi 3 board


The Pi is a credit-card sized computer that connects to a computer monitor or TV and uses input
devices like a keyboard and mouse. It is capable of performing various functions such as
surveillance systems, military applications, surfing the internet, playing high-definition videos and
live games, and building databases.

Figure 4.1 Specification of Raspberry pi

The device is implemented using a Raspberry Pi 3B board, and its specifications are as
follows.


a. Processor / SoC (System on Chip)


The Raspberry Pi has a Broadcom BCM2835 System on Chip module with an ARM1176JZF-
S processor. The Broadcom SoC used in the Raspberry Pi is equivalent to a chip used in an
old smartphone (Android or iPhone). While operating at 700 MHz by default, the Raspberry
Pi provides real-world performance roughly equivalent to 0.041 GFLOPS. On the CPU
level the performance is similar to a 300 MHz Pentium II of 1997-1999, but the GPU
provides 1 Gpixel/s, 1.5 Gtexel/s, or 24 GFLOPS of general-purpose compute, and the
graphics capabilities of the Raspberry Pi are roughly equivalent to the level of performance
of the Xbox of 2001. The Raspberry Pi chip, operating at 700 MHz by default, will not
become hot enough to need a heatsink or special cooling.

b. Power source
The Pi is a device which consumes 700 mA, or 3 W, of power. It is powered by a MicroUSB
charger or via the GPIO header. Any good smartphone charger will do the work of powering
the Pi.

c. SD Card
The Raspberry Pi does not have any onboard storage available. The operating system is loaded
on a SD card which is inserted on the SD card slot on the Raspberry Pi. The operating system
can be loaded on the card using a card reader on any computer.

Figure 4.2 SD card


d. GPIO – General Purpose Input Output


General-purpose input/output (GPIO) is a generic pin on an integrated circuit whose behavior,
including whether it is an input or output pin, can be controlled by the user at run time. GPIO
pins have no special purpose defined, and go unused by default. The idea is that sometimes
the system designer building a full system that uses the chip might find it useful to have a
handful of additional digital control lines, and having these available from the chip can save
the hassle of having to arrange additional circuitry to provide them.

Figure 4.3 GPIO connector on Raspberry Pi
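As an illustration of driving a GPIO line from Python with the RPi.GPIO library (the pin number and LED wiring below are assumptions for the example, not connections used in this project):

```python
# Illustrative RPi.GPIO usage: blink an LED assumed to be wired to GPIO18.
import time

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)            # use Broadcom pin numbering
GPIO.setup(18, GPIO.OUT)          # configure GPIO18 as a digital output

try:
    for _ in range(5):
        GPIO.output(18, GPIO.HIGH)    # drive the control line high
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)     # and low again
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the pin on exit
```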

e. Audio jack

A standard 3.5 mm TRS connector is available on the RPi for stereo audio output. Any
headphone or 3.5mm audio cable can be connected directly. Although this jack cannot be used
for taking audio input, USB mics or USB sound cards can be used.

f. Status LEDs
There are 5 status LEDs on the RPi that show the status of various activities as follows:


“OK” - SDCard Access (via GPIO16) - labeled as "OK" on Model B Rev1.0 boards and

"ACT" on Model B Rev2.0 and Model A boards

“POWER” - 3.3 V Power - labeled as "PWR" on all boards

“FDX” - Full Duplex (LAN) (Model B) - labeled as "FDX" on all boards

“LNK” - Link/Activity (LAN) (Model B) - labeled as "LNK" on all boards

“10M/100” - 10/100Mbit (LAN) (Model B) - labeled (incorrectly) as "10M" on Model B


Rev1.0 boards and "100" on Model B Rev2.0 and Model A boards

Figure 4.4 Status LEDs

g. USB 2.0 Port


USB 2.0 ports are the means to connect accessories such as mouse or keyboard to the
Raspberry Pi. There is 1 port on Model A, 2 on Model B and 4 on Model B+. The number of
ports can be increased by using an external powered USB hub which is available as a standard
Pi accessory.

h. Ethernet
Ethernet port is available on Model B and B+. It can be connected to a network or internet
using a standard LAN cable on the Ethernet port. The Ethernet ports are controlled by
Microchip LAN9512 LAN controller chip.


i. CSI connector
CSI–Camera Serial Interface is a serial interface designed by MIPI (Mobile Industry
Processor Interface) alliance aimed at interfacing digital cameras with a mobile processor.
The RPi foundation provides a camera specially made for the Pi which can be connected with
the Pi using the CSI connector.

j. JTAG headers
JTAG is an acronym for 'Joint Test Action Group', an organization that started back in the
mid-1980s to address test point access issues on PCBs with surface-mount devices. The
organization devised a method of access to device pins via a serial port that became known as
the TAP (Test Access Port). In 1990 the method became a recognized international standard
(IEEE Std 1149.1). Many thousands of devices now include this standardized port as a feature
to allow test and design engineers to access pins.

k. HDMI
HDMI – High Definition Multimedia Interface
An HDMI 1.3a type A port is provided on the RPi to connect with HDMI screens.

Models

Attribute               Model A                        Model B                                Model B+

Target price            US$25                          US$35                                  US$35
SoC (all models)        Broadcom BCM2835 (CPU, GPU, DSP, SDRAM, and single USB port)
CPU (all models)        700 MHz ARM1176JZF-S core (ARM11 family, ARMv6 instruction set)
GPU (all models)        Broadcom VideoCore IV @ 250 MHz
Memory (SDRAM)          256 MB (shared with GPU)       512 MB (shared with GPU) as of 15 October 2012
USB 2.0 ports           1 (direct from BCM2835 chip)   2 (via on-board 3-port USB hub)        4 (via on-board 5-port USB hub)
Video input (all)       15-pin MIPI camera interface (CSI) connector, used with the Raspberry Pi camera
Video outputs (all)     Composite RCA (PAL and NTSC; on Model B+ via 4-pole 3.5 mm jack),
                        HDMI (rev 1.3 & 1.4), raw LCD panels via DSI
Onboard storage         SD / MMC / SDIO card slot (3.3 V)                                     MicroSD
Onboard network         None                           10/100 Mbit/s Ethernet (8P8C) via on-board USB adapter
Low-level peripherals   8x GPIO, UART, I²C bus, SPI bus with two chip selects                 17x GPIO
Power ratings           300 mA (1.5 W)                 700 mA (3.5 W)                         600 mA (3.0 W)
Power source (all)      5 V via MicroUSB or GPIO header
Size (all)              85.60 mm x 56 mm (3.370 in x 2.205 in), not including protruding connectors
Weight (all)            45 g (1.6 oz)

Table 4.1 Specifications of Raspberry Pi

4.5.2 Logitech Camera


It is a plug-and-play setup which is easy to use. You can easily make video calls on major
IMs. It has a 5 MP camera with high resolution. Its built-in mic gives you a clear
conversation without any noise. The XVGA video recording system reaches a resolution of
about 1024x768. In this project, we use the Logitech camera to capture images and to support
the gesture-control function.

Figure 4.5 Logitech Camera


4.6 Software Component Description

4.6.1 Raspbian OS

Although the Raspberry Pi's operating system is closer to the Mac than Windows, it's the
latter that the desktop most closely resembles. It might seem a little alien at first glance, but
using Raspbian is hardly any different from using Windows (barring Windows 8, of course).
There's a menu bar, a web browser, a file manager and no shortage of desktop shortcuts to
pre-installed applications. Raspbian is an unofficial port of Debian Wheezy armhf with
compilation settings adjusted to produce optimized "hard float" code that will run on the
Raspberry Pi. This provides significantly faster performance for applications that make heavy
use of floating point arithmetic operations. All other applications will also gain some
performance through the use of advanced instructions of the ARMv6 CPU in Raspberry Pi.
Although Raspbian is primarily the efforts of Mike Thompson (mpthompson) and Peter Green
(plugwash), it has also benefited greatly from the enthusiastic support of Raspberry Pi
community members who wish to get the maximum performance from their device.

Figure 4.6 Raspbian OS

4.6.2 Tesseract OCR


Python-Tesseract is an optical character recognition (OCR) engine available for various
operating systems. OCR is the process of electronically extracting text from images and
reusing it in a variety of ways, such as document editing and free-text searches; it is a
technology capable of converting documents such as scanned papers, PDF files and captured
images into editable data. Tesseract can be used on Linux, Windows and macOS, and
programmers can use its API to extract typed or printed text from images; third-party
graphical front-ends are also available.
The installation of Tesseract OCR has two parts: the engine itself and the training data for a
language. Tesseract can be obtained directly from many Linux distributions; the latest stable
version of Tesseract OCR is 3.05.00. In our project, Tesseract converts the text in the
captured image into editable text format.
Tesseract features:

 Page layout analysis.

 Support for more languages.

Figure 4.7 Tesseract OCR
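
As a minimal sketch of how this OCR step can be driven from Python (assuming the
pytesseract wrapper and Pillow are installed; the file names are illustrative, not the project's
exact source):

    # OCR sketch: extract text from a captured image with Tesseract.
    from PIL import Image
    import pytesseract

    image = Image.open("capture.jpg")           # image captured by the camera
    text = pytesseract.image_to_string(image)   # run the Tesseract engine

    with open("out.txt", "w") as f:             # save the recognized text for speech output
        f.write(text)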


4.6.3 Open CV
OpenCV is a library of programming functions mainly aimed at real-time computer vision. It
was developed by Intel's research center, subsequently supported by Willow Garage, and is
now maintained by Itseez. It is written in C++, its primary interface is also C++, and it has
bindings for Python, Java and MATLAB. OpenCV runs on a variety of platforms: Windows,
Linux, macOS and OpenBSD on the desktop, and Android, iOS and BlackBerry on mobile.
It is used for diverse purposes such as facial recognition, gesture recognition, object
identification, mobile robotics and segmentation. Our setup combines the OpenCV C++ API
with the Python language. In this project we use OpenCV version 2: the gesture-control
module uses it to open the camera and capture the image, and it is also used in the
image-to-text and voice conversion pipeline.

Figure 4.8 Open CV
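
A minimal capture sketch using the OpenCV Python binding (device index 0 and the output
file name are assumptions):

    import cv2

    cam = cv2.VideoCapture(0)              # open the Logitech camera (index 0 assumed)
    ok, frame = cam.read()                 # grab a single frame
    if ok:
        cv2.imwrite("capture.jpg", frame)  # save it for the OCR stage
    cam.release()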


4.6.4 Espeak
eSpeak is a compact open-source software speech synthesizer for English and other
languages, available on the Linux and Windows platforms. It is used to convert text to voice
and supports many languages in a small footprint. eSpeak's pronunciation is programmed
using rule files with feedback, and it supports SSML. Its output can be modified by voice
variants: text files that change characteristics such as pitch range, add effects such as echo,
whisper and a croaky voice, or make systematic adjustments to formant frequencies to change
the sound of the voice. The default speaking speed of 180 words per minute can be too fast to
be intelligible, so a slower speed is preferable. In our project, eSpeak converts the recognized
text into a voice signal.

Figure 4.9 Espeak
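
Calling eSpeak from Python is a small subprocess invocation; a minimal sketch (the -s flag
sets the speed in words per minute, reduced here below the 180 wpm default):

    import subprocess

    def speak(text, wpm=140):
        # Invoke the espeak binary; -s sets the speaking speed.
        subprocess.call(["espeak", "-s", str(wpm), text])

    speak("Hello, this is the AI assisted eye speaking.")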


4.6.5 Xming
Xming provides the X Window System display server, a set of traditional sample X
applications and tools, and a set of fonts. It features support for several languages and has
Mesa 3D, OpenGL, and GLX 3D graphics extension capabilities. The Xming X server is
based on Cygwin/X and the X.Org Server. It is cross-compiled on Linux with the MinGW
compiler suite and the Pthreads-Win32 multi-threading library. Xming runs natively on
Windows and does not need any third-party emulation software. Xming may be used with
implementations of Secure Shell (SSH) to securely forward X11 sessions from other
computers. It supports PuTTY and ssh.exe, and comes with a version of PuTTY's plink.exe;
the Xming project also offers a portable version of PuTTY. When SSH forwarding is not used,
the local file Xn.hosts must be updated with the host name or IP address of the remote machine
where the GUI application is started. The software has been recommended by authors of books
on free software when a free X server is needed, and is described as simple and easier to
install, though less configurable, than other popular free choices such as Cygwin/X.

Figure 4.10 Xming


4.6.6 Putty
PuTTY is a free and open-source terminal emulator, serial console and network file transfer
application. PuTTY was developed for Microsoft Windows, but it has been ported to various
other operating systems. It can connect to a serial port, and it supports a variety of network
protocols, including SCP, SSH, Telnet, and raw socket connections.

Figure 4.11 Putty



Chapter 5
SYSTEM ARCHITECTURE
5.1 High Level Design

5.1.1 System Architecture


System design is the process of defining the architecture, components, modules, interfaces
and data for a system to satisfy specified requirements. It can be seen as the application of
systems theory to product development, and it overlaps with the disciplines of systems
analysis, systems architecture and systems engineering. If the broader topic of product
development "blends the perspective of marketing, design, and manufacturing into a
single approach to product development," then design is the act of taking the marketing
information and creating the design of the product to be manufactured. Systems design is
therefore the process of defining and developing systems to satisfy the specified requirements
of the user.

Until the 1990s, systems design had a crucial and respected role in the data processing
industry. In the 1990s, the standardization of hardware and software resulted in the ability to
build modular systems, and the increasing importance of software running on generic
platforms enhanced the discipline of software engineering. Object-oriented analysis and
design methods are becoming the most widely used methods for computer systems design,
and UML has become the standard language of object-oriented analysis and design. It is
widely used for modelling software systems and is increasingly applied to designing
non-software systems and organizations as well.

System design is one of the most important phases of the software development process.
The purpose of design is to plan the solution to the problem specified by the requirements
documentation; in other words, design is the first step towards a solution. The design
of the system is perhaps the most critical factor affecting the quality of the software. The
objective of the design phase is to produce the overall design of the software: to figure out
the modules that should be in the system so that it fulfils all the system requirements in an
efficient manner. The design contains the specification of all the modules, their interaction
with other modules, and the desired output from each module. The output of the design
process is a description of the software architecture.

Figure 5.1 Architecture of the System

System architecture is a conceptual model that defines the structure, behavior, and other
views of a system. An architecture description is a formal description and representation of a
system, organized in a way that supports reasoning about the system's structures and
behaviors. Figure 5.1 shows a general block diagram describing the activities performed by
this project.

5.1.2 Data Flow Diagram


A data flow diagram (DFD) is a graphical representation of the flow of data through a
system. On a DFD, data items flow from an external data source or an internal data store to
another internal data store or an external data sink, via an internal process. A DFD provides
no information about the timing of processes, or about whether processes operate in
sequence or in parallel.

Figure 5.2 Data Flow Diagram of the System

5.1.3 Sequence Diagram


A sequence diagram shows object interactions arranged in time sequence. It depicts the
objects and classes involved in the scenario and the sequence of messages exchanged
between the objects needed to carry out the functionality of the scenario. Sequence diagrams
are typically associated with use case realizations in the Logical View of the system under
development; they are sometimes called event diagrams or event scenarios.

Figure 5.3 Sequence Diagram

First, the user sends a file to gui::OCR (the graphical user interface). The Optical Character
Recognition (OCR) component checks whether an image is present, fetches the image, and
returns any identified character to the user. The image is read using reader::Image_reader
and passed to the OCR engine. The engine identifies each character with the
nextChar:Graphic_char function; once all characters are read, they are segmented and passed
to the recognition function. Finally, EOF is reached and control returns to the user.


5.1.4 Use Case Diagram


A use case diagram is a dynamic or behavior diagram in UML. Use case diagrams model the
functionality of a system using actors and use cases. Use cases are a set of actions, services,
and functions that the system needs to perform. In this context, a "system" is something being
developed or operated, such as a web site.

The "actors" are people or entities operating under defined roles within the system. The
“scenario” is a specific sequence of actions and interactions between actors and the system.”
Use case” is a collection of related success and failure scenarios, describing actors using the
system to support a goal.

There are three use case diagrams:

 Text-to-Speech use case

 Image/Gesture-to-Speech use case

 Speech-to-Text use case
Figure 5.4 Use case Diagram of Text-To-Speech

The user writes words in a file and uploads the file; this is given as input to the system. The
system converts the words into speech output. The user can pause, stop or rewind this
speech, or re-listen to it, as shown in figure 5.4.


Figure 5.5: Use case Diagram of Image/ Gesture-to-Speech

The user captures the image or gesture with the Logitech camera and can zoom in or out of
the image. The capture is given as input to the system, and speech output is produced. One
can pause, stop or rewind the speech, or re-listen to a specific word.

Figure 5.6 Use case Diagram of Speech-To-Text

The user gives input through the microphone, and the system verifies that the voice is
recognized correctly. The system then gives text output to the user, which is printed on the screen.



Chapter 6

IMPLEMENTATION
Implementation is the realization of an application, or execution of a plan, idea, model, design,
specification, standard, algorithm, or policy. In other words, an implementation is a realization
of a technical specification or algorithm as a program, software component, or other computer
system through programming and deployment. Many implementations may exist for a given
specification or standard.

Implementation is one of the most important phases of the Software Development Life
Cycle (SDLC). It encompasses all the processes involved in getting new software or hardware
operating properly in its environment, including installation, configuration, running, testing,
and making necessary changes. Specifically, it involves coding the system in a particular
programming language and transferring the design into an actual working system. This phase
is conducted with the idea that whatever is designed should be implemented, keeping in mind
that it fulfils the user requirements, objectives and scope of the system. The implementation
phase produces the solution to the user's problem.

6.1 Overview of System Implementation


There could be many ways of implementing this project; we chose Python because it is a
widely used high-level, general-purpose, interpreted, dynamic programming language. Its
design philosophy emphasizes code readability, and its syntax allows programmers to express
concepts in fewer lines of code than would be possible in languages such as C++ or Java. The
language provides constructs intended to enable clear programs on both a small and a large scale.

Python supports multiple programming paradigms, including object-oriented, imperative,
functional and procedural styles. It features a dynamic type system and automatic memory
management, and it has a large and comprehensive standard library. CPython, the reference
implementation of Python, is free and open-source software with a community-based
development model, as are nearly all of its alternative implementations. Python is managed
by the non-profit Python Software Foundation.

The Project is divided into 4 different modules:


1. Text-to-Speech (TTS)
2. Image-to-Speech using camera (ITSC)
3. Gesture-to-Speech (GTS)
4. Speech-to-Text (STT)

6.2 Module Description:


A module description provides detailed information about the module and its supported
components, and it can be accessed in different ways: it can be read directly, used to generate
a short HTML description, or used for an environment check that verifies that all needed
types and services are available in the environment where the components will be used. This
environment check can take place during registration/installation or during a separate
consistency check for a component.

6.2.1 Text-to-speech (TTS)


The first process, text-to-speech conversion, serves vocally impaired people who cannot
speak. They express their thoughts as text, which is then transformed into a voice signal;
the converted voice signal is spoken out by the eSpeak synthesizer. After option OP1 is
selected, the os and subprocess modules are imported, the text-to-speech function is called,
and the text is entered as input. Once the text is entered from the keyboard, the eSpeak
synthesizer converts it to speech. The process can be interrupted from the keyboard with Ctrl+C.

Figure 6.1 Text-to-Speech
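
A minimal sketch of this option (the function name and prompt are illustrative, not the
project's exact source):

    import subprocess

    def text_to_speech():
        # Read lines from the keyboard and speak them until Ctrl+C is pressed.
        try:
            while True:
                text = input("Enter text: ")
                subprocess.call(["espeak", text])
        except KeyboardInterrupt:
            print("Text-to-speech stopped.")

    text_to_speech()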


6.2.2 Image-to-speech using camera (ITSC)


The second process is developed for blind people who cannot read normal text. To help
them, we interface the Logitech camera and capture the image using the OpenCV tool. The
captured image is converted to text using Tesseract OCR, and the text is saved to the file
out.txt. The text file is then opened, and the paragraph is split into sentences and saved.
Within the OCR, adaptive thresholding techniques convert the image into a binary image,
which is then transformed into character outlines. The converted text is read out by eSpeak.

Figure 6.2 Image-to-Speech
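
Tesseract performs this binarization internally; a sketch of the equivalent adaptive-thresholding
step written explicitly with OpenCV (the block size and constant are illustrative values):

    import cv2

    img = cv2.imread("capture.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Each pixel is compared against the mean of its 11x11 neighbourhood
    # minus a small constant, yielding a binary image for character outlines.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    cv2.imwrite("binary.jpg", binary)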

6.2.3 Gesture-to-speech (GTS)


The third process is developed for vocally impaired people who cannot convey their
thoughts to others: they communicate through gestures, which are mostly not understood by
normal people. The process starts by capturing an image and cropping the useful portion.
The RGB image is converted to grayscale for better processing, the cropped image is blurred
with a Gaussian blur function, and the result is passed to a threshold function to obtain the
highlighted part of the image. The contours are found, along with the angle between two
fingers. Using the convex hull function, we can locate the fingertips; counting the angles
smaller than 90 degrees gives the number of convexity defects. According to the number of
defects, the corresponding text is printed on the display and read out by the speaker.
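
A hedged sketch of the defect-counting core (contour selection, thresholds and error handling
are simplified assumptions):

    import math
    import cv2
    import numpy as np

    def count_defects(binary):
        # The largest contour is assumed to be the hand.
        # (OpenCV 2 and 4 return two values here; OpenCV 3 returns three.)
        contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        hand = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        count = 0
        for i in range(defects.shape[0]):
            s, e, f, _ = defects[i, 0]
            start, end, far = hand[s][0], hand[e][0], hand[f][0]
            # Angle at the defect point via the law of cosines.
            a = np.linalg.norm(end - start)
            b = np.linalg.norm(far - start)
            c = np.linalg.norm(end - far)
            angle = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
            if angle <= math.pi / 2:  # angles below 90 degrees count as defects
                count += 1
        return count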
6.2.4 Speech-to-Text (STT)

The fourth process is developed for people with hearing impairment, who cannot understand
the words of normal people. To help them, our project provides a switch that converts the
voice of a normal person into text. We use the Chromium browser, which automatically
connects to the URL speechtexter.com. The process works by assigning a minimum threshold
voltage to recognize the voice signal; the input is given through a microphone and converted
into text format. The site supports a variety of languages: if the voice signal is recognizable,
the text is printed; otherwise an error signal is given.
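
A sketch of launching the browser for this mode (the chromium-browser binary name is an
assumption matching Raspbian, and kiosk mode is an illustrative choice):

    import subprocess

    # Open the speech-to-text web page full screen on the Pi's display.
    subprocess.Popen(["chromium-browser", "--kiosk", "https://www.speechtexter.com"])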

Figure 6.3 Speech-to-Text

6.3 Pseudo code


Pseudo code is an informal high-level description of the operating principle of a computer program
or other algorithm. It uses the structural conventions of a programming language, but is intended
for human reading rather than machine reading. Pseudo code typically omits details that are not
essential for human understanding of the algorithm, such as variable declarations, system-specific
code and some subroutines. The programming language is augmented with natural language
description details where convenient, or with compact mathematical notation. The purpose of
using pseudo code is that it is easier for people to understand than conventional programming
language code, and that it is an efficient, environment-independent description of the key
principles of an algorithm. It is commonly used in textbooks and scientific publications that
document various algorithms, and also in the planning of computer program development,
for sketching out the structure of the program before the actual coding takes place.

No standard for pseudo code syntax exists, as a program in pseudo code is not an
executable program. Pseudo code resembles, but should not be confused with, skeleton programs,
including dummy code, which can be compiled without errors. Flowcharts and Unified Modeling
Language (UML) charts can be thought of as graphical alternatives to pseudo code, but they
take up more space on paper.


6.3.1 Image-to-speech using camera (ITSC)
Step 1: Start
Step 2: Choose option OP2 to convert image to speech.
Step 3: Call the function ImageToSpeech().
Step 4: Capture the required image.
Step 5: Convert the image to text using Tesseract OCR.
Step 6: Split the text into sentences.
Step 7: Display the text on the screen.
Step 8: Call the TextToSpeech() function.
Step 9: Convert the text to speech using the eSpeak synthesizer.
Step 10: Voice output is generated.
Step 11: Stop



Chapter 7
TESTING
7.1 Unit testing

Unit testing is a level of software testing where individual units or components of the software
are tested. The purpose is to validate that each unit of the software performs as designed. A unit
is the smallest testable part of any software; it usually has one or a few inputs and usually a
single output.
Sr. No | Communication Case | Message sent by A | Reply sent by B | Result
1 | Normal person A to blind person B | Audio message sent by A | Audio message sent by B | Passed
2 | Normal person A to dumb person B | Audio message sent by A | Text message by B to A, converted to audio or text | Passed
3 | Normal person A to blind and dumb person B | Audio message sent by A | Message sent as an image by B and converted into a text message | Passed
4 | Normal person A to deaf and dumb person B | Audio message sent by A, converted to text | Text message by B to A, converted to audio or text | Passed
5 | Blind person A to blind person B | Audio message sent by A | Audio message sent by B | Passed
6 | Blind person A to blind person B | Audio message sent by A | Text message by B to A, converted to audio or text | Passed
7 | Blind person A to dumb person B | Audio message sent by A, converted to text | Message sent as an image by B and converted into a text message | Passed
8 | Dumb person A to blind person B | Message by A, converted to an audio message | Audio message sent by B | Passed
9 | Dumb person A to dumb person B | Message by A, converted to an audio message | Text message by B to A, converted to audio or text | Passed
10 | Dumb person A to deaf and dumb person B | Audio message sent by A | Text message by B to A, converted to audio or text | Passed
11 | Deaf and dumb person A to blind person B | Message by A, converted to an audio message | Audio message sent by B | Passed
12 | Deaf and dumb person A to deaf and blind person B | Audio message sent by A | Text message by B to A, converted to audio or text | Passed
13 | Deaf and dumb person A to dumb person B | Audio message sent by A | Text message by B to A, converted to audio or text | Passed
14 | Blind and dumb person A to blind person B | Message sent as an image by A | Audio message sent by B | Passed
15 | Blind and dumb person A to dumb person B | Message sent as an image by A and converted into a text message | Text message by B to A, converted to audio or text | Passed
16 | Blind and dumb person A to deaf and dumb person B | Message sent as an image by A and converted into a text message | Text message by B to A, converted to audio or text | Passed
17 | Blind and dumb person A to normal person B | Message sent as an image by A and converted into a text message | Audio message sent by B | Passed
18 | Deaf and dumb person A to normal person B | Audio message sent by A | Audio message sent by B and converted into a text message | Passed
19 | Blind, deaf and dumb person A to blind person B | Message sent as an image by A | Message sent as an image by B | Passed
20 | Blind, deaf and dumb person A to deaf and dumb person B | Message sent as an image by A and converted into a text message | Text message by B to A, converted to audio or text | Passed
21 | Blind, deaf and dumb person A to dumb person B | Message sent as an image by A and converted into a text message | Text message by B to A, converted to audio or text | Passed
22 | Blind, deaf and dumb person A to normal person B | Message sent as an image by A and converted into a text message | Text message by B to A, converted to audio or text | Passed
23 | Blind, deaf and dumb person A to blind and dumb person B | Message sent as an image by A | Message sent as an image by B | Passed
24 | Blind, deaf and dumb person A to blind, deaf and dumb person B | Message sent as an image by A | Message sent as an image by B | Passed

Table 7.1 Unit Testing


7.2 Integration testing

An integration test checks how two different components or subsystems interact with each other.
Like a unit test, it generally checks for a specific response to a particular input. Integration testing
takes as its input modules that have been unit tested, groups them into larger aggregates, applies
tests defined in an integration test plan to those aggregates, and delivers as its output the
integrated system, ready for system testing.

In this project, the camera, headphones and SD card are integrated with the power supply;
some of the USB ports are used when necessary, along with the Ethernet cable.


Figure 7.1 Integration of components with Raspberrypi

7.3 Functional Testing


A functional test is a form of integration test in which the application is run "literally": you
would have to make sure that an email was actually sent in a functional test, because it tests
your code end to end. It is often considered best practice to write each type of test for any given
codebase. Unit testing often provides the opportunity to obtain better "coverage": it is usually
possible to supply a unit under test with arguments and/or an environment that causes all of its
potential code paths to be executed.

This is usually not as easy to do with a set of integration or functional tests, but integration and
functional testing provide a measure of assurance that your "units" work together, as they will be
expected to when the application runs in production.

Figure 7.2 Testing the connections with Raspberrypi


Functional tests are typically written using the WebTest package, which provides APIs for
invoking HTTP(S) requests to the application. We also use pytest and pytest-cov to provide
simple testing and coverage reports. The functional tests used in the project are mentioned below.

XMing is a free X window server for Microsoft Windows that allows one to use Linux
graphical applications remotely. It is fully featured, lean, fast and simple to install, and because
it is standalone native Windows software, it is easily made portable. Xming is secure when used
with SSH. PuTTY is Project Xming's preferred and integrated X terminal emulator.

Figure 7.3 Functional Testing of Xming


Figure 7.4 Functional Testing of Putty

7.4 Acceptance testing

Acceptance testing is often the final step before rolling out the application. Usually the end users
who will be using the application test it before 'accepting' it. This type of testing gives the end
users confidence that the application being delivered to them meets their requirements.

7.5 System testing


System testing is a level of software testing where the complete, integrated software is tested.
The purpose of this test is to evaluate the system's compliance with the specified requirements.
The aim is to develop a prototype model for blind, dumb and deaf people, embodied in a single
compact device; the device provides a unique solution for these people to manage by themselves.
The project's source code is written in Python, the easiest programming language to interface
with the Raspberry Pi. The system provides four options, each with a different function, and the
user chooses the option for the required conversion.
1) Text-to-Speech (TTS): option 1
2) Image-to-Speech using camera (ITSC): option 2
3) Gesture control: option 3
4) Speech-to-Text (STT): option 4

Test Cases | Expected Outcome | Actual Outcome | Result
Text to Speech | Audio for blind | Audio for blind | Passed
Image to Speech using camera | Audio from image for blind | Audio from image for blind | Passed
Gesture to Text | Gesture to text for dumb | Gesture to text for dumb | Passed
Speech to Text | Text for deaf | Text for deaf | Passed

Table 7.2 System Testing



Chapter 8
RESULTS

Fig 8.1 Final Connection

The final result is the Raspberry Pi 4 connected to a USB storage device holding the OS and
data, with wired headphones providing audio output. A camera module is also included for the
object detection part of the project.

Raspberry Pi 4: The Raspberry Pi 4 Model B is used for processing and running programs. It’s
connected to a USB storage device, likely containing the operating system (OS) or additional
data storage for the project.

Headphones: A pair of wired headphones with a damaged earcup, connected to the Raspberry
Pi 4 via the 3.5mm audio jack, providing audio output for the program or multimedia experience.

Camera Module: The camera is a small, versatile module that connects to Raspberry Pi
boards via the CSI interface. It supports high-quality imaging, video recording, and AI-based
applications. Popular for DIY projects, it is used in robotics, surveillance, and photography.

Storage Device: The USB storage device connected to the Raspberry Pi holds the operating
system (OS) and potentially other essential data for running the Raspberry Pi, completing the
setup for various projects and tasks.

Fig 8.2 User of the project

The image shows a person demonstrating the project designed to aid the visually impaired. The
setup includes a Raspberry Pi connected to various components, such as a camera mounted on the
user’s glasses and headphones for audio feedback. This innovative system processes visual data
and provides auditory descriptions to assist the user in navigating their environment effectively.



CONCLUSION
The proposed assistive device aims to bridge the communication gap between the differently-
abled and the general population by integrating advanced technologies such as text-to-speech
(TTS), speech-to-text (STT), and gesture recognition. This device is designed to empower visually
impaired, vocally impaired, and hearing-impaired individuals to communicate effectively and
lead independent lives. By leveraging IoT technologies, such as sensors, cameras, and actuators,
the system provides a holistic solution that is both adaptive and responsive to user needs.

The device’s modular design ensures that it caters to diverse requirements, whether it involves
converting text into voice for visually impaired users or translating gestures into text and speech
for vocally impaired individuals. Moreover, the speech-to-text feature enables the hearing-impaired
to comprehend spoken words easily. With a focus on portability, affordability, and simplicity, the
system addresses key challenges while offering a scalable platform for future enhancements.

This project serves as a foundation for developing more sophisticated assistive technologies. It
has the potential to evolve into a language-independent and real-time communication system,
incorporating advanced AI models for improved gesture and voice recognition. By prioritizing
inclusivity, the assistive device paves the way for a standard lifestyle for the differently-abled,
ensuring they are no longer hindered by communication barriers.
FUTURE ENHANCEMENTS
1. Integration with Mobile Devices: Incorporate the system into a smartphone app for enhanced
portability and convenience.
2. Multi-Language Support: Expand the device’s capabilities to support more languages, making
it accessible to a global audience.
3. AI-Powered Gesture Recognition: Employ advanced AI models for real-time and accurate
recognition of complex gestures.
4. Improved Gesture Sensitivity: Enhance the sensitivity and precision of sensors to handle
intricate and dynamic gestures effectively.
5. Real-Time Translation: Include real-time translation features to bridge communication gaps
between individuals speaking different languages.
6. Cloud Connectivity: Utilize cloud-based systems for storing and analyzing data, enabling
continuous learning and updates to recognition models.
