BY FADZAI NYONI
in Telecommunications
2022
DECLARATION
I declare that this project, “Automated Water Control and Monitoring Systems,” is an
original work done by me under the supervision of Mr Ncube, Telone Centre for
Learning.
PROJECT SUPERVISOR: MR NCUBE
SIGNATURE: ……………………………………………………………………………

STUDENT: FADZAI NYONI
SIGNATURE: ……………………………………………………………………………
DEDICATION
This project is dedicated first to the Almighty God for giving me the strength to
complete this work, and secondly to my friends and family for their support and
encouragement during my studies.
ACKNOWLEDGEMENT
I would like to thank my supervisor for guiding me throughout this project. My
colleagues and friends stood by me through all the difficulties I encountered and
encouraged me not to quit. I would like to thank my parents for their financial
support and for encouraging me to push harder. I would also like to thank the
Almighty for seeing me through safely, free of sickness and accidents.
ABSTRACT
Deafness and hearing loss refer to the inability to hear, either totally or partially.
According to the World Health Organization, 360 million people worldwide (over 5%
of the world’s population) have disabling hearing loss, and 32 million of them are
children (“World Health Organization,” 2015). The proposed system comprises six
blocks, with the microcontroller acting as the heart of the system. The usual method
of communication for a normal person is speaking and texting, so text input is
required in our system when a normal person starts a conversation with a deaf and
mute person. The normal person uses a mobile application to select the text he or she
wants to share with the deaf or mute person. The text is sent to the microcontroller
through a Bluetooth module, the message is displayed on the LCD, and a buzzer is
turned on to notify the deaf or mute person that a message has been sent. This project
provides an aid to those with special needs and will help them express their ideas and
thoughts.
TABLE OF CONTENTS
DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
Hardware design
4.0 Introduction: The design procedure
REFERENCES
APPENDIX A
LIST OF TABLES
Table 4.1: Software specifications
Table 5.1: Obtained results
LIST OF FIGURES
Figure 3.1: System block diagram
Figure 4.1: PCB layout
Figure 4.2: 3D view of PCB layout
Figure 4.3: Stages in system testing
Figure 4.4: Snippet of the program
Figure 4.5: Defect testing cycle
Figure 5.1: LCD performance
LIST OF ABBREVIATIONS
I2C   Inter-Integrated Circuit
mA    Milliamps
ms    Milliseconds
V     Volts
mm    Millimetres
LCD   Liquid crystal display
DC    Direct current
AC    Alternating current
GHz   Gigahertz
PCB   Printed circuit board
PIC   Peripheral interface controller
IoT   Internet of Things
CHAPTER 1: INTRODUCTION
1.1 Introduction
Deafness and hearing loss refer to the inability to hear, either totally or partially.
According to the World Health Organization, 360 million people worldwide (over 5%
of the world’s population) have disabling hearing loss, and 32 million of them are
children (“World Health Organization,” 2015). According to the General Census of
Population and Housing, hearing impairment is the second-largest proportion in the
distribution of people with disabilities. Deafness deeply affects the quality of life of
deaf individuals and their community. Some people think that deaf and dumb people
are less intelligent than normal people, but this idea is not true: deaf and dumb people
have sharp intelligence that makes them equal to normal people. Hearing disabilities
differ from other disabilities because of the presence of another language that
compensates for verbal or oral language, known as sign language. Sign language can
be defined as the language used by deaf and dumb people to communicate with each
other and with other people. Despite the existence of sign language, communication
between disabled people and normal people remains difficult, largely because of the
misconceptions that are common among hearing people. The proposed system uses an
Android application to connect people with special needs with others. This project
aims to design a simple embedded-system-based communication device for deaf and
dumb people. Two major problems are taken into consideration: the first is deaf and
dumb people communicating with a normal person, and the second is communication
between deaf and dumb people.
1.2 Background
The only means of communication available to deaf and dumb people is the use of
sign language. Using sign language, they are limited to their own world. This
limitation prevents them from interacting with the outside world to share their
feelings, creative ideas and potential. Very few people who are not themselves deaf
and dumb ever learn sign language. This limitation increases the isolation of deaf and
dumb people from the rest of society. Technology is one way to remove this hindrance
and benefit these people.
1.3 Statement of the Problem
Deaf and mute people find it very difficult to communicate in new environments.
Furthermore, the participants (deaf, mute and normal) must undergo training sessions
to learn sign language. This practice is common among almost all deaf and mute
people, but normal people do not usually learn sign language. Therefore, deaf and
mute people face challenges when talking to someone who does not know sign
language.
1.4 Objectives:
1. To interface an LCD, a buzzer, a Bluetooth module and an Android application with
a PIC microcontroller.
2. To develop a mobile application to facilitate communication between the deaf/mute
and those who can hear or speak.
3. To become familiar with PIC microcontroller programming for communication
applications.
1.6 Functionality
The proposed system comprises six blocks, with the microcontroller acting as the
heart of the system. The usual method of communication for a normal person is
speaking and texting, so text input is required in our system when a normal person
starts a conversation with a deaf and mute person. The normal person uses a mobile
application to select the text he or she wants to share with the deaf or mute person.
The text is sent to the microcontroller through a Bluetooth module, the message is
displayed on the LCD, and a buzzer is turned on to notify the deaf or mute person that
a message has been sent. This project provides an aid to those with special needs and
will help them express their ideas and thoughts.
1.7 Research Hypothesis
1.8 Justification
1.9 Conclusion
The first chapter has identified that the system will be of paramount importance in
helping its users. The chapter gives an overview of the overall topic and the purpose
of the report. It includes the background of the study, the statement of the problem,
the objectives, and the scope of the investigation. The background of the study consists
of relevant references and a brief theoretical background of the key concepts of the
project.
CHAPTER 2: LITERATURE REVIEW
2.0 Introduction
This chapter covers the basic theories that this project rests on and reviews related
work, including sensor-based, vision-based and smartphone-based assistive
technologies for deaf and mute people, sign language, and the communication
strategies used by the deaf.
Literature review
Sensor-Based Technologies
Sensor-based assistive technologies mainly use external devices, such as handheld or
finger-worn devices, to detect various gestures. The authors of one study developed a
finger-worn device called the Magic Ring that can translate a predefined gesture into
information for communication. The research is based on a microcontroller device
and flex sensors that detect various finger movements and produce output as text or
speech. A hand-gesture recognition glove was also developed to translate sign
language into text and show it on a portable device. Sensor-based devices are quite
accurate in detecting gestures, but they can be costly and cumbersome to use for
communication purposes.
Vision-Based Technologies
Vision-based assistive technologies for deaf and mute users mainly use various image
processing techniques to detect sign languages or gestures from input images or
videos. A few studies have detected sign language using the Microsoft Kinect camera
and interpreted the signs into information. A communication system for the deaf and
mute was developed to recognise gestures from input images. The authors in [15]
developed a full-duplex communication system in which hand gestures are extracted
from input video and converted to text or speech, while the speech of a normal person
is mapped to a corresponding gesture. In [16], a device called the Digital Dactylology
Converter (DDC) was developed, which processes sign language from input images
and converts it to voice signals and text messages. The authors in [17] developed a
multi-modality Arabic sign language detection system using a huge image data set.
Vision-based assistive technologies are very good at detecting sign language but are
complex to use for communication purposes.
Smartphone-Based Technologies
The popularity and usability of smart devices such as smartphones and smartwatches
have led to the development of many assistive technologies. El-Gayyar in [18]
developed a mobile application, supported by cloud computing, that can translate
Egyptian Arabic speech into a visualised 3D avatar. The work in [19] is also based on
speech recognition visualised with an avatar, but it supports two-way communication.
The research in [20] developed a hearing aid using a smartphone to help hearing-
impaired people. In [21], a real-time emergency assistance application called iHelp
was developed, in which information is sent through GPS and text messages. The
research in [22] is based on a smartwatch assistive device that triggers a vibration
when sound is detected. Smart-device-based assistive solutions have much more
usability than the other technologies. Therefore, the smartphone was used as the
communication medium for the proposed system.
Hand Gesture Recognition for Sign Language Recognition
The various approaches that have been used to build an interface for differently abled
people are discussed in this section. Gunasekaran et al. [3] propose a system that
integrates a sensing unit, a processing unit, a voice storage unit and a wireless
communication unit. By integrating a flex sensor and an APR9600 with a
PIC16F877A, they were able to build an interactive system. Pratibha Pandey et al. [4]
modelled a gesture recognition system that performs feature detection and feature
extraction of hand gestures with the help of the SURF algorithm using image
processing. Similarly, Shweta S. et al. [6] modelled a similar recognition system using
an ATmega controller (8, 16, 168, 328 or 2560), an ARM processor (LPC2148), a PIC
controller or an 8051 instead of a Raspberry Pi, with the output displayed on an LCD
or on a mobile phone using a Wi-Fi module or Bluetooth.
An Assistive Device For Deaf And Dumb People
Language is the method of human communication, either spoken or written,
consisting of the use of words in a structured and conventional way. In a situation
involving communication between two people from different regions and languages,
it is much more difficult to convey ideas and views, and the involvement of a third
person, such as a translator, is required. The same scenario exists in communication
between a normal person and a person with hearing and speaking difficulties. To
overcome this problem, a hand glove is introduced. The model acts as an interpreter
that translates sign language to text and then into voice [5]. It uses a sensor-embedded
glove that is capable of converting the hand sign language used by the hearing-
impaired into alphanumeric characters, which are also converted into a voice output.
Communication among participants can be done effectively if all are bound by a
common language. In addition, it ensures that hearing-impaired people are able to
obtain the best possible education and services within the community. The aim is
therefore to design a device in the form of a wearable hand glove which recognises
sign language, converts it into text on any handheld device and finally gives a voice
output.
Embedded Based Hand Talk Assisting System for Deaf and Dumb
This proposal helps deaf and dumb people who are unable to communicate or who
have difficulties in communication. A data glove is equipped with five flex sensors,
each fixed on one finger of the glove, for monitoring and sensing the static movements
of the fingers of the hand. Whatever the person wants to communicate is activated in
two ways: either by hand gesture or by the keypad on the device. This text input is
processed using a microcontroller. Furthermore, frequently spoken words can be
stored in the memory of an APR9600 voice chip and easily retrieved using hotkeys.
The output on the LCD can be read by the deaf person, and the speaker output can be
heard by the hearing person. This device helps communication when used by both
people involved, whether deaf, dumb or a normal person.
Language and Communication: The Deaf Language
Deafness can be characterised by three factors: the degree of deafness, when it
occurred, and the part of the body that affects the ability to hear. Prelingual deafness
occurs before the child acquires language, while post-lingual deafness occurs later,
after the child has acquired language (Nadoushan 16-17). Although the period of life
when the deafness occurred is essential, we cannot disregard the degree of deafness
as indicated by the sound volume, measured in decibels. Deafness is divided into five
classes that depend upon the detection of sound. The first classification, described as
mild, implies that the lowest level of sound adults can hear is in the range of 25-45 dB,
and children in the range of 20-40 dB. For the second classification, called moderate,
the lowest level of sound is in the range of 41-55 dB. For the third, moderately severe,
only sounds louder than 56-70 dB can be heard, and for the fourth, severe, the sound
must be louder than 71-90 dB. The last classification, called profound, includes people
who have difficulty hearing sounds below 90 dB. The affected body part determines
whether the individual experiences conductive or perceptive hearing loss; in other
words, the difference between conductive and perceptive hearing loss depends on the
body part where the dysfunction occurred (Mole, McColl and Vale 11).
Conductive hearing loss implies that the dysfunction occurred in one of the
hearing organs, for instance in the middle ear, and is usually connected to the
volume of the sound.
Perceptive hearing loss refers to a dysfunction in the brain, where the sound
should be interpreted, and is usually linked to the regularity of the sound.
The types of deafness stated above influence the type of communication of the deaf
people.
Communication is the process of giving and receiving information. The chart below
shows the relationships in the act of communication.
ADDRESSER _____________ MESSAGE _____________ ADDRESSEE
                         CODE
The addresser gives the message to the recipient using any communication mode that
is in a common code (Dontcheva-Navratilova 13). In Grammatical Structures in
English, Dontcheva-Navratilova argues that knowledge of the common code is
important for the ability of the participants to encode and decode the message (14).
Therefore, whether the code is spoken, written or gestured, it must be understood by
both the addresser and the recipient for them to be able to communicate with one
another.
From a psychological perspective, communication can be divided into verbal and
nonverbal communication. As shown in the chart above, verbal and nonverbal
communication follow the code or communication mode. For instance, nonverbal
communication includes sign language and writing, while verbal communication is
based on spoken language and, among others, incorporates speech and tone. In
addition to the strategies used by hearing individuals, deaf people use numerous
communication strategies (CS) to achieve communication. Among the CS are:
spoken mother tongue
written mother tongue
lip reading
finger spelling
sign language
drawing
The preferred strategy depends upon the deaf participant and the time when the
deafness occurred. A deaf individual who lost their hearing after acquiring a language,
or one who has a mild to moderate hearing loss, will prefer spoken language, as they
would have developed communication skills just like hearing people. Furthermore, a
child who is deaf and lives in a hearing family will have minimal opportunity to
systematically acquire sign or spoken language, and his or her communication will be
a mixture of numerous CS. As suggested by Nadoushan, "pre-lingual deaf children
who are born in hearing families often encounter some troubles in language
acquisition… the degree of exposure they get is not as rich as that which deaf children
who are born to deaf parents or children who are born to hearing parents get" (16-17).
These children, as Nadoushan purports, "remain language deprived up until their
school experience which is most likely their first involvement with a competent and
naturalistic language model" (17). "This early language deprivation clarifies the
difficult insights that 90 percent of deaf children born into homes with just hearing
parents experience delays in language acquisition compared to hearing children in
hearing families and deaf children in deaf families" (qtd. in Briggle 69).
2.4.1 The deaf language
Chimedza (2007) defines sign language (SL) as a visual language that uses a system
of non-manual, facial and body movements as a means of communication. The term
language "may refer either to the human ability for acquiring and using complex
systems of communication or to a specific instance of such a system of complex
communication". Sign language has its own grammar and syntax, which developed
just as in any other spoken language. It should be noted that sign languages are neither
collections of gestures nor reflections of their spoken language equivalents; they are
fully functioning languages in their own right. Sign languages are not grammatically
related in any way to their national spoken languages. Those who are born deaf are
likely not to understand English or Shona, as they have never heard them spoken. This
is different from those who lost their hearing after they had learned to speak, as they
can read, write and understand the spoken language. Every country has its own spoken
as well as sign language, and dialects can be found in both types of language.
2.5 Communicating with the Deaf
Profound hearing loss is the last stage of hearing disability, where the person may be
born without the ability to hear a language. The majority resort to sign language,
assisted by lip-reading techniques and residual hearing. This type of hearing loss can
be defined as pre-lingual deafness. The Victorian Deaf Society has given suggestions
that can assist any hearing person to communicate with a deaf person:
First seek the attention of the deaf person, either by waving a hand or by gently
touching them on the arm.
The person should know the area or subject of the topic. Make sure you
highlight when moving to the next subject.
Speak evenly and do not rush, so that the person can manage to lip-read. Do
not raise your voice, as this will not change anything.
Use body language and facial expressions.
Avoid noisy places, as noise makes speech reading and residual hearing
difficult.
Make use of environmental visual cues to convey the message, such as signage,
directions, hand-outs, notes and captions on videos.
Take into consideration the distance between the interlocutors. This will assist
in listening and lip reading.
If the person is having difficulty comprehending the message, try to rephrase
rather than repeating what you have said before.
Resort to writing notes in the case of a noisy environment.
If in doubt that the listener has comprehended the message well, ask for
suggestions on how to improve.
Simulation Software
The student had several options to choose from for an effective and open-source
simulation package with easy configuration and the essential libraries needed to
simulate the workings of the project before implementing the real prototype. A good
simulator should allow one to digitally recreate several aspects of the process,
including designing one's own circuits and components, creating programs (sketches)
or importing them from the Arduino IDE, and simulating the interaction between the
Arduino, the I/O interfaces and the program. The student weighed the following
options for simulation purposes.
Proteus - The Proteus Design Suite is a proprietary software suite used primarily
for electronic design automation. The software is used mainly by electronic
design engineers and technicians to create schematics and electronic prints for
manufacturing printed circuit boards. Proteus is used to simulate, design and
draw electronic circuits. It was developed by Labcenter Electronics.
Tinkercad - Tinkercad is a free, online service from Autodesk that began in 2017
and is probably the most user-friendly Arduino simulator. You can easily design
your own circuits, create a program in block or text format and then debug it. The
simulation of Arduino boards and I/O interfaces and the interaction with the
code works like a charm, and the code can be downloaded and shared with other
makers. There are limits, of course: Tinkercad doesn’t allow you to create or add
your own parts and components, and there are only six Arduino libraries, which
you can’t add to.
Arduino uses its own programming language, which is similar to C++. However, it's
possible to use Arduino with Python or another high-level programming language. In
fact, platforms like Arduino work well with Python, especially for applications that
require integration with sensors and other physical devices.
Hardware
The ATmega serves as the processing unit of the system and interfaces with the other
components. It is a high-performance microcontroller with high-endurance flash
program memory. On top of these features, it introduces design enhancements that
make it a logical choice for this design:
High Performance RISC CPU
Extreme Low-power Management with nanowatt XLP™
100nA, Typical Sleep Mode
500nA, Typical Watchdog Timer
500nA at Typical 32kHz Timer1 Oscillator
Flexible Oscillator Structure
Precision 16MHz Internal Oscillator Block
Four Crystal Modes up to 64MHz
Two External Clock Modes up to 64MHz
4X Phase Lock Loop (PLL)
Secondary Oscillator Using Timer1 at 32kHz
Fail-safe Clock Monitor, Allows for Safe Shutdown If Peripheral Clock Stops
Two-speed Oscillator Start-up
Full 5.5V Operation
Self-reprogrammable Under Software Control
Power-on Reset (POR), Power-up Timer (PWRT) and Oscillator Start-up
Timer (OST)
Programmable Brown-out Reset (BOR)
Extended Watchdog Timer (WDT) with On-chip Oscillator and Software
Enable
Programmable Code Protection
In-circuit Serial Programming™ (ICSP™) via Two Pins
LCD
In this project, I used a 16×2 LCD, which is an alphanumeric display module. The
LCD is programmed using the Register Select (RS), Read/Write (R/W), Enable (EN)
and data pins. The LCD can work in 8-bit mode or in 4-bit mode to save pins; in this
project, 4-bit mode is used, and the device can display about 192 different characters.
The LCD is connected directly to a port of the microcontroller, with the VDD pin
connected to the power supply. The device makes use of the RS and EN pins when
writing data to the screen. Data is transferred on data pins D0-D7 in 8-bit mode and
on D4-D7 in 4-bit mode. I chose this device because it is easy to program and to
interface with the microcontroller. The operating voltage of the LCD is 4.7 V to 5.3 V,
and its current consumption is about 1 mA without the backlight.
Pin 2 (VDD): powers the LCD with +5 V (4.7 V - 5.3 V).
Pins 7-14 (Data pins D0-D7): form an 8-bit data bus that can be connected to a
microcontroller to send 8-bit data. These LCDs can also operate in 4-bit mode; in
that case only data pins D4-D7 are used and D0-D3 are left free.
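Assuming this direct parallel wiring, a minimal 4-bit-mode example with the standard
Arduino LiquidCrystal library looks like the sketch below. The Arduino pin numbers
here are illustrative assumptions, not the exact wiring used in the project.

// Minimal 4-bit-mode example (assumed pin numbers, for illustration only).
#include <LiquidCrystal.h>

// LiquidCrystal(rs, enable, d4, d5, d6, d7): only D4-D7 are wired in 4-bit mode
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);          // initialise a 16x2 display
  lcd.print("Hello, world");
}

void loop() {
  lcd.setCursor(0, 1);       // move to the second line
  lcd.print(millis() / 1000); // seconds since reset
}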
CHAPTER 3: METHODOLOGY
3.0 Introduction
This chapter identifies the system's interfaces, components, modules and architecture,
and all the requirements the designer needs to develop a functional system that meets
the goals and aims mentioned in Chapter 1. In addition, the chapter identifies the data
and the input processes of the system in detail. This makes it possible to show that the
designed system will meet the needs and expectations of its end users. The chapter
also ensures that the proposed system will allow future alterations or modifications,
so that the system can continue to be used despite changes in technology. This makes
the difference in designing a system that proficiently and effectively responds to the
end user's requirements and produces the specified output.
The methodology used in the system implementation was the prototyping
methodology. This methodology involves several steps that make it efficient:
1. Identification of the initial requirements
This is where the project's main goal was defined, and research was done on potential
obstacles and the best ways to overcome them. In this case, the main goal was to ease
communication between deaf or mute people and hearing people.
2. Designing
This is where designs of the systems will be proposed, and the most effective one will
be selected. Designs of this proposed system will be derived from the conceptual
framework.
3. Prototyping
This is where a mock-up of the system will be developed.
4. Customer evaluation
The stakeholders are shown the mock-up and given a chance to interact with it.
Feedback is expected in order to know the way forward.
5. Review and feedback
This is where necessary changes and updates are made based on the feedback obtained
from the stakeholders.
6. Redesign
Where designing of the improved system will be done.
3.1 Hardware design
The proposed system comprises six blocks, with the microcontroller acting as the
heart of the system. The usual method of communication for a normal person is
speaking and texting, so text input is required in our system when a normal person
starts a conversation with a deaf and mute person. The normal person uses a mobile
application to select the text he or she wants to share with the deaf or mute person.
The text is sent to the microcontroller through a Bluetooth module, the message is
displayed on the LCD, and a buzzer is turned on to notify the deaf or mute person that
a message has been sent. This project provides an aid to those with special needs and
will help them express their ideas and thoughts.
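The sketch below is a minimal illustration of this receive-and-notify path, written in
the same Arduino C/C++ style as the listing in Appendix A. The HC-05 serial pins, the
buzzer pin and the LCD's I2C address are assumptions made for illustration, not the
exact wiring of the prototype.

// Illustrative sketch of the Bluetooth-to-LCD/buzzer path. Pin numbers, the
// LCD I2C address and the baud rate are assumed values, not taken from the
// project schematic.
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
#include <SoftwareSerial.h>

LiquidCrystal_I2C lcd(0x27, 16, 2);   // assumed I2C address for a 16x2 LCD
SoftwareSerial bt(10, 9);             // assumed RX, TX pins wired to the HC-05
const int buzzerPin = 8;              // assumed buzzer pin

void setup() {
  lcd.init();
  lcd.backlight();
  bt.begin(9600);                     // HC-05 default baud rate
  pinMode(buzzerPin, OUTPUT);
  lcd.print("Waiting...");
}

void loop() {
  if (bt.available()) {
    // Read one line of text sent from the mobile application
    String msg = bt.readStringUntil('\n');
    msg.trim();

    // Show the first 32 characters of the message across the two LCD lines
    lcd.clear();
    lcd.print(msg.substring(0, 16));
    lcd.setCursor(0, 1);
    lcd.print(msg.substring(16, 32));

    // Pulse the buzzer so the deaf or mute user notices the new message
    digitalWrite(buzzerPin, HIGH);
    delay(500);
    digitalWrite(buzzerPin, LOW);
  }
}

With this arrangement, the mobile application only needs to send the selected phrase
followed by a newline character over the paired Bluetooth serial link.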
modifications were easy to execute, saving a lot of time and effort compared to the
manual drawing.
Etching
This is the process in which the unwanted copper on the PCB was removed using an
etching solution. The PCB was ironed against glossy paper to transfer the ink onto the
board, since covering the copper tracks was essential. The printed PCB was dipped
into a warm etching solution and constantly checked for changes. Great care had to
be taken to avoid wearing away tracks or leaving unwanted copper on the board.
Ironing was done using a domestic iron, and distributing the heat equally across the
board was an important part of the process.
Drilling
Drilling was done using a stand-alone drilling machine and a 1mm drill bit. During
this stage, great caution was taken to drill holes as they appeared on the board. It was
a bit laborious to fit some of the components onto the board, for example, the PIC
Microcontroller.
Population of components
This was the process of fixing the components to the drilled board. After the drilling
stage, the components were mounted appropriately on the PCB. The amount of solder
used was kept to a minimum while ensuring it was enough to keep the connection
between each component and its track at its best. Great caution was taken to avoid
bridging, short-circuit or open-circuit problems.
System testing
System testing is a significant quality control measure used during software
development. The goal of testing is to uncover requirements, design and coding errors
in the program, and to determine whether the system appears to be working according
to the specifications. It is the phase where the designer tries to break the system and
tests it with real-life scenarios. Testing methodologies and test goals differ between
the hardware and software domains. Embedded software development employs
particular compilers and development environments that provide means for
debugging. The designer mainly focused on self-tests and functional verification after
developing the system. For hardware, specific techniques are used to test the correct
behaviour of the designed circuit. During this stage, errors may arise, leading to
redesigning of the system.
Packaging and casing
After the mounting and soldering of the various components, the project was packaged
in a plastic casing with all components projecting from the top of the casing.
During the prototype design, the microcontroller was kept in a place free of static
electricity, and it was not subjected to over-current, as this would result in damage to
the microcontroller. When etching the PCB, all the tracks were completely covered
with permanent marker before immersing the board in the etching solution, to avoid
the tracks being dissolved.
3.5 Recommendations
CHAPTER 4: SOFTWARE AND HARDWARE IMPLEMENTATION
The system is designed in two sections: hardware and software. In the hardware
section, the overall design was broken down into a functional block diagram, with each
block representing a portion of the designed system that performs a particular task.
Figure 4.1: PCB layout
In the final stage the board was put into an etching solution, FeCl3. A redox reaction
takes place as the copper board is immersed in the ferric chloride solution. After the
etching process the board is rinsed with running water to wash off the corrosive etching
mixture, then dried, and the toner is cleaned off with some acetone. Finally, holes are
drilled in the board and the board is ready for component population.
Packaging and casing
After the mounting and soldering of various components, the project was packaged on
a plastic casing with all components projecting from the top of the casing.
4.1.2 Testing categories and results
System testing of the designed system involves different categories. This was done to
ensure that the designed system meets the objectives mentioned in Chapter 1 and the
end-user requirements. The tests were conducted against the identified functional and
system requirements.
Unit testing
The modules were tested separately to see whether they performed their intended
tasks, together with platform-related pieces such as communication protocols and
interrupts. A unit test was performed on the LCD and buzzer by interfacing them to
the ATmega microcontroller and running a program to display characters and sound
the buzzer. Each module performed as expected by the developer, completing its part
of the data acquisition process.
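A stand-alone bench test of this kind can be as small as the sketch below; the pin
assignments are assumptions used for illustration rather than the prototype's actual
wiring.

// Stand-alone unit test for the display and buzzer (assumed wiring).
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // RS, EN, D4-D7 in 4-bit mode (assumed)
const int buzzerPin = 8;                // assumed buzzer pin

void setup() {
  lcd.begin(16, 2);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  // Walk a test character across both lines to check every column
  for (int col = 0; col < 16; col++) {
    lcd.clear();
    lcd.setCursor(col, 0);
    lcd.print('#');
    lcd.setCursor(15 - col, 1);
    lcd.print('#');
    delay(150);
  }
  // A short beep confirms the buzzer drive circuit
  digitalWrite(buzzerPin, HIGH);
  delay(200);
  digitalWrite(buzzerPin, LOW);
  delay(800);
}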
Integration Testing
The researcher undertook the integration of the different modules of the system after
the modules had been coded and unit tested. The integration was done in a planned
manner and, as is usual for integration testing, was carried out incrementally over
several steps. At each step the previously planned modules were added to the partially
integrated system and the resultant system was tested. Finally, after all the modules
had been integrated and tested, the complete system was tested as a whole. The screen
that appears after the user successfully calls a module from the main menu was used
to verify that the user can start entering data.
System testing
After successful integration testing, system testing was carried out. System testing
ensures that the developed system conforms to the requirements laid out in Chapter 3.
The system testing process was carried out in two different forms:
Alpha testing: this system testing was done by the researcher (developer).
Beta testing: a friendly set of clients performed this system testing.
System testing was also carried out in a planned manner. The system test plan
identified all testing-related activities to be performed, specified the testing schedule,
and allocated the resources.
Acceptance testing
The designer conducted acceptance testing to check the acceptability of the system.
After system testing is completed, the users take over the system for acceptance
testing. The objective of this test was to exercise the system using live scenarios. The
end users participated with significant ownership and showed willingness to accept
the system or to suggest improvements, so that the system could be successfully
implemented. The beta method was used to conduct the acceptance test:
Beta testing, whereby the designer uses actual data obtained from the users of
the system, rather than manipulated data created by the designer, to test the
system.
4.2 Software implementation
The code for the microcontroller was written in the C/C++ language using the Arduino
IDE. The C language is very flexible when it comes to programming microcontrollers,
and the Arduino IDE adds to that flexibility by providing an array of libraries
specialised for the hardware modules and communication interfaces used. It has
built-in libraries and routines that make the development of applications easier and
faster. It has many features, including a code explorer that allows users to monitor the
program structure, variables and functions, and a debugging tool that can be used to
monitor program execution.
The code used for this project is shown in the appendix. The source code was written
for the ATmega microcontroller, following its instruction set and peripheral
capabilities. A snapshot of the program is shown below:
Figure 4.4: Snippet of the program
After writing the program for the system, the developer performed the following tests
to debug the program:
Defect testing
Defect testing was conducted, using simulations, to find areas where the system does
not meet its specifications. Tests were conducted to reveal defects in the system, and
debugging was then undertaken to remove them. Testing the system to identify defects
was performed manually by tracing through the code, which enables errors to be
identified although it takes a lot of time. After the developer had corrected the defects,
the system was retested through regression testing. Regression testing was done to
confirm that the changes made did not introduce new errors into the system. The
process was repeated until no further errors were found. The developer conducted
defect testing following the procedure shown in the diagram below:
Figure 4.5: Defect testing cycle
4.2.2 Mobile application frontend
The mobile application frontend is the part that users interact with, i.e., everything that
users see when they are navigating around the application. The developer designed a
graphical user interface that is user-friendly and easy to understand for the end user.
Android Studio
Android Studio, developed by Google on top of JetBrains' IntelliJ IDEA platform, is
the official integrated development environment for building Android applications.
Android Studio was used to develop the front end of the application, which interfaces
with the database through various controls. The researcher chose this development
tool because of its numerous advantages. It enables the writer to write code more
quickly and easily thanks to productivity-enhancing features such as auto-completion,
code refactoring, method lookup and many more. It also provides pre-built templates
and support for both managed and native code, which helped the researcher reduce
the amount of code written.
4.2.3 Software specifications
Software specifications refer to the different software that will be required during the
software development. The table below shows the purposes of each software required
for the development of the proposed system.
Table 4.1: Software specifications
Purpose              Software
Operating system     Android version 8.0 and above
Documentation        Microsoft Office
Database creation    MySQL
Writing code         Android Studio
CHAPTER 5: RESULTS AND EVALUATION
5.0 Introduction
This chapter discusses the results which were obtained when different levels of system
testing were performed.
Table 5.1: Obtained results
ACTION                              RESULT
When the system is initialised      The Bluetooth module pairs with the mobile phone
This section includes a discussion of the results and a conclusion, as well as the
behaviour of the system from both the hardware and software perspectives.
5.2.1 Liquid Crystal Display performance
The device displayed the correct information on the screen, and the contrast could be
adjusted with ease. It worked as expected, displaying instructions for the user and all
the outputs, and it prompted the user to enter the current value to act as the limit. It is
recommended that an I2C LCD be used in future work to reduce the number of wire
connections.
Figure 5.1: LCD performance
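For reference, an I2C character LCD needs only the two I2C lines (SDA and SCL)
plus power, which is the wiring saving recommended above. The short sketch below
illustrates this; the 0x27 address is a common default for these I2C backpacks and is
an assumption here, not a measured value.

// I2C LCD example: only SDA and SCL are needed in addition to power.
// The 0x27 address is an assumed (but common) default.
#include <Wire.h>
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x27, 16, 2);

void setup() {
  lcd.init();              // uses the hardware I2C pins (A4/A5 on an Uno)
  lcd.backlight();
  lcd.print("I2C LCD ready");
}

void loop() {
  // Nothing to do: the reduced wiring is the point of this example.
}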
Power constraints
Frequent power cuts from the national grid hindered periodic testing and debugging
of the system because of the unavailability of mains power. However, the project has
a backup power supply from LiPo batteries, which is more reliable.
5.3 Conclusion
The proposed system has been successfully developed. The researcher managed to
meet the system objectives of automating the communication process rather than
relying on a tedious and inaccurate manual process. Finally, system testing was
carried out using all the methods described, and all users of the software were fully
trained on the new system.
CHAPTER 6: CONCLUSION AND RECOMMENDATIONS
6.0 Introduction
This report has discussed the development of an assistive system for the deaf and
dumb. The proposed system overcomes the real-time difficulties faced by hearing-
impaired people and helps them to improve their lifestyle. System efficiency is
improved with the help of the ATmega, integrated with an HC-05 Bluetooth module
that helps with longer-distance communication. Compared with existing systems, it is
lightweight and can be carried anywhere. The ATmega is used as the heart of the
system, with an LCD and a mobile application adding a visual aspect for the deaf
person. All the objectives mentioned in Chapter 1 were met.
The designed system can further be developed by adding a feature of recognising sign
language.
REFERENCES
[1] Hsu C W and Lin C J 2002 A comparison of methods for multiclass support vector
machines IEEE Trans. on Neural Net. 13 2 pp 415-25
[2] Passerini A, Pontil M and Frasconi P 2004 New results on error correcting output
codes of kernel machines IEEE Trans. on Neural Net. 15 1 pp 45-54
[3] Esu O O, Lloyd S D, Flint J A and Watson S J 2014 Integration of low-cost
consumer electronics for in-situ condition monitoring of wind turbine blades 3rd
Renewable Power Gen. Conf. (Naples) pp 1-6
[4] Gungor V C, Lu B and Hancke G P 2010 Opportunities and challenges of wireless
sensor networks in smart grid IEEE Trans. on Industrial Electronics 57 10 pp 3557-64
[5] Wang T, He Y, Li B and Shi T 2018 Transformer fault diagnosis using self-powered
RFID sensor and deep learning approach IEEE Sens. J. 18 15 pp 6399-411
[6] Ravi S, Mezhuyev V, Annapoorani K I and Sukumar P 2016 Design and
implementation of a microcontroller based buck boost converter as a smooth starter
for permanent magnet motor Indonesian J. of Electrical Eng. and Comp. Sci. 1 3
pp 566-74
[7] Zou S, Lin J, Wang H, Lv H and Feng G 2019 An effective method for service
components selection based on micro-canonical annealing considering dependability
assurance Frontiers of Comp. Sci. 13 2 pp 264-79
[8] Ravi S and Balakrishnan P A 2010 Temperature response control of plastic
extrusion plant using Matlab/Simulink Int. J. of Recent Trends in Eng. and Technol.
3 4 pp 135-40
[9] Ravi S, Rajpriya G and Kumarakrishnan V 2015 Design and development of
microcontroller based selective harmonic elimination technique for three phase
voltage source inverter Int. J. of Applied Eng. Research 10 13 pp 11562-78
[10] Halvorsen H P 2018 Programming with Arduino (Norway) ISBN 978-82-691106
[11] Santos J C M, Patino O A and Ortiz S H C 2017 Influence of Arduino on the
development of advanced microcontrollers courses IEEE Revista Iberoamericana de
Tecnologias del Aprendizaje 12 4 pp 208-217
[12] Texas Instruments 2016 L293x quadruple half-H drivers (Texas) pp 1-6
[13] Khanna A and Ranjan P 2015 Solar-powered android-based speed control of dc
motor via secure bluetooth 5th Int. Conf. on Comm. Sys. and Net. Technol. (Gwalior)
pp 1244-49
[14] Lapshina P D, Kurilova S P and Belitsky A A 2019 Development of an
Arduino-based CO2 monitoring device IEEE Conf. of Russian Young Researchers in
Electrical and Electronics Eng. (EIConRus, Saint Petersburg and Moscow) pp 595-97
[15] Kumar R and Khalkho A N 2016 Design and implementation of metal detector
using DTMF technology Int. Conf. on Signal Processing, Comm., Power and
Embedded Sys. (SCOPES, Paralakhemundi) pp 368-71
[16] Stonier A, Murugesan S, Samikannu R, Venkatachary S K, Senthil Kumar S and
Arumugam P 2020 Power quality improvement in solar fed cascaded multilevel
inverter with output voltage regulation techniques IEEE Access 8 pp 178360-71
doi: 10.1109/ACCESS.2020.302778
APPENDIX A
// Appendix A: water level monitoring and pump control sketch (Arduino C/C++).
// NOTE: the original listing was truncated. The ultrasonic sensor pins, the
// SoftwareSerial (NodeMCU) pins, the pump-control threshold and the LCD
// initialisation below are assumptions added so that the sketch compiles and
// runs; they are not values recovered from the original report.
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
#include <SoftwareSerial.h>

LiquidCrystal_I2C lcd(0x27, 16, 2);   // 16x2 LCD on I2C address 0x27
SoftwareSerial nodemcu(2, 3);         // assumed RX, TX pins of the NodeMCU link

#define pump 12        // pump relay pin (assumed active LOW)
#define trigPin 9      // tank ultrasonic sensor trigger (assumed pin)
#define echoPin 8      // tank ultrasonic sensor echo (assumed pin)
#define trigPin1 7     // well ultrasonic sensor trigger (assumed pin)
#define echoPin1 6     // well ultrasonic sensor echo (assumed pin)

int level1;            // well level (%)
long duration1;
int distance1;
int level;             // tank level (%)
long duration;
int distance;

void setup() {
  // put your setup code here, to run once:
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(trigPin1, OUTPUT);
  pinMode(echoPin1, INPUT);
  pinMode(pump, OUTPUT);
  digitalWrite(pump, HIGH);   // pump off at start-up
  Serial.begin(9600);
  nodemcu.begin(9600);
  lcd.init();                 // LCD initialisation (missing in the original listing)
  lcd.backlight();
}

void loop() {
  // put your main code here, to run repeatedly:
  getTankLevel();
  delay(1000);
  getWellLevel();
  delay(1000);

  // Send the two readings to the NodeMCU as "tank,well"
  nodemcu.println((String)level + "," + level1);
  delay(1000);

  // Pump control. The original if-condition was lost in the report; the
  // threshold used here (tank below 20% and water available in the well)
  // is an assumption.
  if (level < 20 && level1 > 20) {
    digitalWrite(pump, LOW);    // turn the pump on
  }
  else {
    digitalWrite(pump, HIGH);   // turn the pump off
  }
  delay(1000);

  // Clear the numeric fields before the next readings are printed
  lcd.setCursor(11, 0);
  lcd.print("     ");
  lcd.setCursor(11, 1);
  lcd.print("     ");
}

void getTankLevel() {
  // Trigger the tank ultrasonic sensor and time the echo
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2;        // echo time to distance in cm
  level = map(distance, 14, 2, 0, 100);   // 14 cm = empty, 2 cm = full
  lcd.setCursor(0, 0);
  lcd.print("Tank Level:");
  lcd.setCursor(11, 0);
  lcd.print(level);
}

void getWellLevel() {
  // Trigger the well ultrasonic sensor and time the echo
  digitalWrite(trigPin1, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin1, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin1, LOW);
  duration1 = pulseIn(echoPin1, HIGH);
  distance1 = duration1 * 0.034 / 2;
  level1 = map(distance1, 14, 2, 0, 100); // waiting for calibration
  lcd.setCursor(0, 1);
  lcd.print("Well Level:");
  lcd.setCursor(11, 1);
  lcd.print(level1);
}