
Volume 5, Issue 11, November – 2020 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Driver Drowsiness Detection System


Nikunj Mistry
UG Student, Department of Information Technology, Universal College of Engineering,
Kaman, Maharashtra, India

Abstract:- A driver drowsiness detection system was developed, based on the concept of a non-intrusive machine-vision system. The system uses a small monochrome camera that points directly at the driver's face and monitors the driver's eyes in order to detect fatigue. When fatigue is detected, the driver is alerted with a warning signal, and if the driver is distracted the system also warns the driver to be careful. This report explains how the eyes can be located and how to determine whether they are open or closed. The algorithm differs from previously published approaches, which is the main objective of the project. The system finds the edges of the face using information obtained from a binarized version of the image, which reduces the area in which the eyes can lie. Once that area is defined, the eyes are found by measuring the horizontal intensity profile. Recalling that the edges of the eyes cause a large change in intensity on the face, the eyes are located by finding the major intensity changes in the facial region. Once the eyes are located, measuring the distances between the intensity changes in the eye area determines whether the eyes are open or closed; a larger distance is associated with closed eyes. If the eyes are found closed for five consecutive frames, the system assumes the driver is falling asleep and issues an alarm. The system can also detect when the eyes cannot be found, and it operates under reasonable lighting conditions.

Keywords:- Binarisation, OpenCV, Detection Algorithm, Noise Removal.

I. INTRODUCTION

Driver fatigue and distraction are a major factor in a large number of car accidents. Recent figures estimate that 1,200 deaths and 76,000 injuries are caused by fatigue-related crashes each year [3]. Improving the technology to detect or prevent drowsiness at the wheel is therefore a major challenge for accident-prevention programs. Because of the danger that drowsiness presents on the road, strategies to counter its effects need to be developed. This project aims to develop a drowsiness recognition system. The emphasis is on building a device that can accurately track whether the driver's eyes are open or closed in real time [1]. By tracking the pupils, signs of driver fatigue are expected to appear early enough to prevent a collision. Detecting fatigue requires assessing eye movements and blinking patterns over a sequence of facial images [2].

Initially, we planned to use Matlab to find blinking patterns. The approach was based on monitoring intensity levels in the image [4], [5], [6]. The algorithm was as follows. First, a webcam captures an image of the face. Pre-processing is done by binarizing the image [7]. The top of the head and the sides of the face are then found to reduce the area in which the eyes lie. The center of the face is located and used as a reference when measuring toward the left and right sides [9]. The boundaries of the facial area are then determined, and significant intensity variations along these measurements are used to describe the location of the eyes. The horizontal measurement does not shift while the eyes are closed, and this is used to detect blinking [7], [8], [12]. Matlab, however, had some problems. Its processing requirements were very high, and there were speed issues in real-time operation: Matlab could only process 4-5 frames per second, and performance was even lower on a machine with little RAM. The blink of an eye is a matter of milliseconds, and the movement of the driver's head can also be very fast. Although detection was possible with the Matlab software we developed, the result was too slow for our purposes.

This is where OpenCV came in. OpenCV is an open-source computer-vision library. It is optimized for computational performance, with a strong emphasis on real-time applications, and it helps to develop vision applications quickly and efficiently. OpenCV suits our limited processing capability and high-speed data. In OpenCV, we used the Haar-training tools to detect faces and eyes [5], [9], [11], [14]. Haar training produces a classifier from a set of positive and negative samples. The steps are as follows. First, collect face and eye sample data; this should be stored in a text file that lists the files in one or more folders. For the classifier to work effectively, it requires a large amount of data [10]. A sample-creation utility is then used to create the vector output file, so the training process can be reproduced from this text file. It produces positive samples from the images, resized to a specified width and height. The boosted classifiers are built from weak classifiers; usually these weak classifiers are single-split decision trees called stumps. During training, each weak classifier learns its classification decisions from the data, and its accuracy determines its weight in the vote; data points on which errors are made are given higher weight [8]. The cycle continues until the weighted-vote error of the decision trees falls below a certain threshold. This algorithm is effective when there is a large amount of training data. Face and eye classifiers are required for our project, so we used this training procedure to create our haarclassifier.xml files [5], [10], [11], [13].
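As a rough illustration of this detection step, the following Python/OpenCV sketch loads trained Haar cascade files and searches a single webcam frame for a face and then for eyes inside the face region. The cascade file names, camera index, and detection parameters are assumptions for illustration; the project's own haarclassifier.xml would be substituted for the stock cascades shipped with OpenCV.

import cv2

# Stock cascades shipped with OpenCV are used here as stand-ins for the
# project's own trained haarclassifier.xml files.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # webcam pointed at the driver (index assumed)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect the face first, then look for eyes only inside the face region.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        print("face at", (x, y, w, h), "eyes found:", len(eyes))
cap.release()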

II. OBJECTIVE

Improving the technology for monitoring or preventing drowsiness at the wheel is a major challenge for accident-prevention programs. Because of the risk that drowsiness presents on the road, countermeasures need to be developed. The aim of this project is to improve the detection of drowsiness. The focus is on developing a device that can accurately track whether the driver's eyes are open or closed in real time. By tracking the pupils, signs of driver fatigue are expected to appear early enough to prevent a collision. Identifying fatigue requires a sequence of facial images in which eye movements and blink patterns are monitored.

III. LITERATURE SURVEY

1. Techniques for Drowsy Driver Detection
Existing drowsiness-detection techniques can be divided into the following categories: sensing physiological characteristics, sensing driver behavior, sensing the vehicle's response, and monitoring the driver's response.

2. Monitoring Physiological Characteristics
Among these methods, the most accurate techniques are those based on human physiological signals. This approach is implemented in two ways: measuring changes in body signals, such as brain waves, heart rate, and eye blinking; and measuring physical changes, such as sagging posture, the leaning of the driver's head, and the open/closed state of the eyes. The first technique, although very accurate, is not practical, because sensing electrodes would have to be attached directly to the driver's body, which would be irritating and distracting. In addition, prolonged driving degrades the signal quality at the sensors, reducing their monitoring accuracy. The second technique is better suited to real-world driving conditions because it is non-intrusive, obtaining the required cues from video cameras.

3. Non-intrusive Ways of Detecting Drowsiness
Driver and vehicle behavior can be monitored through the analysis of steering-wheel movement, accelerator or brake inputs, vehicle speed, lateral acceleration, and lateral displacement. These are also non-intrusive ways of detecting drowsiness, but they are limited by the vehicle type and driving conditions. The final technique for detecting drowsiness is to monitor the driver's response: the driver is periodically asked to send feedback to the device to indicate alertness. The problem with this method is that it eventually becomes tiresome and annoying to the driver.

4. System Configuration of Background and Ambient Light
Since the eye-tracking system relies on changes in intensity on the face, it is important that the background does not contain any object with sudden intensity changes. The camera could pick up a highly reflective object behind the driver, which could eventually be mistaken for an eye. Since this design is a prototype, a controlled lighting area was set up for testing. It is also necessary to keep the ambient light low, so that the only significant light illuminating the face comes from the source directed at the driver. If there is too much ambient light, the contribution of this light source diminishes. A dark background and low ambient light were therefore used in the test area (in this case, the ceiling light was physically higher and thus weaker). This setup is reasonably realistic, because inside a car there is no direct light and the background is relatively uniform.

5. Using the Absence of the Retinal Reflection to Detect When the Eyes Are Closed
There are many strategies and procedures for eye tracking and monitoring. Most of them are in some way based on eye features within the video image of the driver. The original objective of this project was to use (only) the reflection of the retina as a means of finding the eyes on the face, and then to use the absence of this reflection to detect when the eyes are closed. It was later realized that this may not be the best way to locate the eyes, for two reasons: first, the retinal reflection diminishes in low lighting conditions; and second, the reflection may not be visible if the person has small eyes.

IV. EXISTING SYSTEM

Various techniques are used to identify driver drowsiness; they can be divided into three categories [4], [5], [6]. The first group focuses on vehicle measures such as lane-position monitoring and steering patterns [7], [8], [12]. The advantage of this approach is that, when the data is available on the Controller Area Network, it can be accessed easily without external hardware. The downside is that, depending on the measured parameters, road conditions and driver behavior can lead to false positives or to missed microsleeps. A microsleep is a brief episode of sleep that often occurs without the person even knowing it is happening. Anyone who is fatigued can experience one, but the people most at risk are those who work night shifts, have sleep disorders such as sleep apnea, or are sleep deprived. The second group comprises methods based on the driver's physiological measurements [9]. To diagnose drowsiness, researchers have used physiological signals such as the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and electrooculogram (EOG) [10], [13]. To apply this approach, electrodes must be attached to the driver's body, which is unappealing to both drivers and car manufacturers; to date it has not been considered successful. The third group comprises methods that track the driver's eyes and face [5], [9], [11], [14]. This paper is based on the third category, using the eye state to identify drowsiness. The eye state is determined from images of the driver's face: a camera records pictures of the driver, and Matlab is used to extract an image of the eye and to analyze whether the eye is open or closed. This method is more practical than the others because the driver's drowsiness can be caught directly from the eyes.

V. PROBLEM STATEMENT

Nowadays, automobiles are becoming smarter as new technology develops. Cars can understand the road, the environment, and the driver's behavior, and manufacturers keep adding new safety features. It has been found that the driver-related factors that contribute most to traffic accidents are drowsiness, distracted driving, and inattention. According to the National Sleep Foundation's Sleep in America survey, 60 percent of adult drivers (about 168 million people) have driven a vehicle while feeling drowsy, and 37 percent (103 million) have actually fallen asleep at the wheel. Therefore, to avoid accidents and save lives, a driver drowsiness warning function is very important.

VI. PROPOSED SYSTEM

The proposed approach is divided into two major parts. The first part collects the driver's images from the captured video, identifies the facial region in those images, and then estimates the eye region. The second part is responsible for extracting the relevant features that reflect the drowsiness level of the eye. Binarization is the next step toward detecting the eyes. Binarization converts the image into a binary image, i.e. an image in which each pixel takes only one of two values; in this case the values are 0 and 1, where 0 is black and 1 is white. It is easy to separate objects from the background in a binary image. The grayscale image is converted to a binary image through a threshold: the resulting binary image has the value 1 (white) for all pixels of the original image whose luminance exceeds the threshold, and 0 (black) for all other pixels. The threshold generally depends on the ambient lighting conditions and the driver's skin tone. A threshold value of 150 was found to work well after studying several photographs of different faces under different lighting conditions. The criterion used in choosing the threshold was that the binary picture of the driver's face should be predominantly white, with some dark blobs from the eyes, nose, and/or lips. An ideal binary image for the algorithm is one in which the background is predominantly black while the face is predominantly white, which makes it possible to identify the edges of the face.
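A minimal sketch of this binarization step, written in Python with OpenCV and NumPy under the 0/1 convention used above; the threshold of 150 is the value reported in the text, and the input file name is only a placeholder.

import cv2
import numpy as np

def binarize(frame_bgr, threshold=150):
    """Convert a color frame to the 0 (black) / 1 (white) binary image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than the threshold become 1 (white); all others become 0 (black).
    return (gray > threshold).astype(np.uint8)

# Example usage (file name is a placeholder):
# binary = binarize(cv2.imread("driver_frame.png"))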
In the eye-detection method, the next step is to find the top and the width of the driver's face. This is important because the outline of the face narrows down the area in which the eyes lie, which makes it easier (computationally) to determine the position of the eyes. The first task is to find the top of the face: find a starting point on the face, then decrease the y-coordinate until the top of the face is identified. Assuming the person's face is near the center of the image, the initial starting point used is (100, 240). The initial x-coordinate of 100 was chosen to ensure that the starting point is a black pixel (not on the face). The following algorithm, sketched in code after the lists below, describes how to locate the actual starting point on the face, which is then used to find the top of the face:

1. Increase the x-coordinate, starting at (100, 240), until a white pixel is found; this is assumed to be the left side of the face.
2. If the initial white pixel is followed by 25 more consecutive white pixels, continue increasing x until a black pixel is found.
3. Count the black pixels after the pixel found in step 2; if a sequence of 25 black pixels is found, this is the right side.
4. The new starting x-coordinate (x1) is the midpoint of the left and right sides.

The top of the head can now be found from the new starting point (x1, 240). The algorithm for finding the top of the head is as follows:
1. Starting from this point, decrease the y-coordinate (i.e. move up the face).
2. Continue decreasing y until a black pixel is found. If y reaches 0 (the top of the image), take that as the top of the head.
3. Check whether the black pixel is followed by white pixels:
   i. If a large number of white pixels follow, continue decreasing y.
   ii. If no white pixels are seen, the top of the head is at the level of the first black pixel.

Once the top of the driver's head has been located, the sides of the face can also be found. The steps used to find the left and right sides of the face are:
1. Increase the y-coordinate of the top of the head (found above) by 10; label this y1 = y + 10.
2. Find the center of the face using the steps below:
   i. From the point (x1, y1), move left until 25 consecutive black pixels are found; this is the left side (lx).
   ii. From the point (x1, y1), move right until 25 consecutive black pixels are found; this is the right side (rx).
   iii. The face center (in the x-direction) is (rx - lx)/2; label it x2.
3. Starting from the point (x2, y1), find the top of the face again. This gives a new y-coordinate, y2.
4. Finally, the edges of the face can be found using the point (x2, y2):
   i. Increase the y-coordinate.
   ii. Move left by decreasing the x-coordinate; when 5 consecutive black pixels are found, this is the left-hand side. Add the x-coordinate to an array labeled 'left x'.
   iii. Move right by increasing the x-coordinate; when 5 consecutive black pixels are found, this is the right-hand side. Add the x-coordinate to an array labeled 'right x'.
   iv. Repeat the steps above 200 times (for 200 different y-coordinates).
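The scanning logic above can be sketched roughly as follows in Python/NumPy, assuming the 0/1 binary image from the previous step, the fixed starting point (100, 240), and image coordinates indexed as binary[y, x]; the helper names are ours, and the consecutive-pixel checks are simplified.

import numpy as np

def find_face_start(binary, start=(100, 240), run=25):
    """Steps 1-4: scan right from the starting point for the left and right
    sides of the face and return the midpoint x1 (simplified)."""
    x, y = start
    width = binary.shape[1]
    # Step 1: increase x until a white pixel (assumed left side of the face).
    while x < width and binary[y, x] == 0:
        x += 1
    left = x
    # Step 2: if followed by `run` white pixels, continue until a black pixel.
    if np.all(binary[y, left:left + run] == 1):
        x = left + run
        while x < width and binary[y, x] == 1:
            x += 1
    right = x      # Step 3 (simplified): first black pixel taken as the right side.
    return (left + right) // 2   # Step 4: midpoint of the two sides.

def find_head_top(binary, x1, y=240):
    """Decrease y from (x1, y) until a black pixel is met (simplified: the
    follow-up check for white pixels above the head is omitted)."""
    while y > 0 and binary[y, x1] == 1:
        y -= 1
    return y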

To detect the rectangular portion of the eye:

EyeDetect = vision.CascadeObjectDetector   (1)

Using the Viola-Jones algorithm, this creates a System object that can detect objects such as the nose, mouth, or upper body; by default the face model is selected. The next command is then executed:

Rectangle = step(EyeDetect, I)   (2)

which returns the bounding rectangle as [x y width height]. The values of this rectangle were then adjusted as follows, to prevent it from including regions such as the nose and the frames of eyeglasses:

x = x + 10   (3)

which moves the starting point 10 pixels to the right, and

w = w - 10   (4)

which trims 10 pixels from the width, so that 10 pixels at the left and 10 pixels at the right have been removed.

y = y + 4   (5)

The starting point is moved 4 pixels downwards.

L = L - 20   (6)

The length is reduced by 20 pixels. A rough OpenCV equivalent of this step is sketched below.
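The listing above uses the syntax of MATLAB's Computer Vision Toolbox. A rough Python/OpenCV analogue of the same idea, with the rectangle tightened as in equations (3)-(6), is sketched below; the stock eye cascade and the helper name are assumptions, and only the margins come from the text.

import cv2

# Stock OpenCV eye cascade used as an analogue of vision.CascadeObjectDetector.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_box(gray):
    """Return the first detected eye rectangle, tightened as in eqs. (3)-(6)."""
    boxes = eye_cascade.detectMultiScale(gray)   # analogue of step(EyeDetect, I)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    x, w = x + 10, w - 10    # eqs. (3)-(4): shift the left edge right, shorten the width
    y = y + 4                # eq. (5): move the top edge 4 pixels down
    h = h - 20               # eq. (6): reduce the length by 20 pixels (L taken as the height)
    return x, y, w, h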

The next step is noise removal. Removing noise from a binary image is straightforward: starting from the top of the face, at (x2, y2), move left pixel by pixel by decreasing the x-coordinate and set each pixel to white, and do this for 200 values of y. Repeat the same on the right side of the face. The key is to stop at the left and right edges of the face; otherwise, details of the facial contours will be lost.

After the black blobs on the face have been removed, the facial edges are found again. Doing this a second time makes the edges of the face easier to see. A sketch of this pass is given below.
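A sketch of this whitening pass, under the assumption that left_x and right_x are the edge arrays built earlier, that y2 is the top of the face, and that the scan covers 200 rows; all identifiers are ours.

import numpy as np

def remove_noise(binary, x2, y2, left_x, right_x, rows=200):
    """Whiten black blobs inside the face, stopping at the stored face edges."""
    cleaned = binary.copy()
    for i in range(rows):
        y = y2 + i
        if y >= cleaned.shape[0] or i >= len(left_x) or i >= len(right_x):
            break
        # Whiten from just inside the left edge up to the center column x2,
        # and from x2 up to (but not including) the right edge, so the
        # facial contour pixels themselves are preserved.
        cleaned[y, left_x[i] + 1:x2] = 1
        cleaned[y, x2:right_x[i]] = 1
    return cleaned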
Detection Algorithm
Once the image has been processed, the final step is to determine whether the eyes are closed. Many images were tested to understand the difference between open and closed eyes. The system uses the following criteria to make this decision:

• Ratio of black to white pixels
• Column count greater than a threshold
• The rectangular frame on the driver's face

The first method calculates the numbers of black and white pixels in the image and takes their ratio, black pixels having the value 0 and white pixels the value 1. When the eyes are open, the ratio of black to white pixels is higher than when the eyes are closed. The ratio is used instead of the black-pixel count alone because, if the image is rescaled, the black and white counts change at the same rate, so a change in image size does not affect the result.

The second detection method is based on the lengths of the columns of black pixels in the image matrix:
• Sum the number of black pixels (0) in each column into an array.
• Check whether the maximum column length is greater than the threshold value.
• If the eyes are partly closed, the column length is reduced.
• When the eyes are closed, the eyelid covers the iris and only the eyelashes contribute dark pixels.

The third method, which uses the shape of the eye (iris), was added to remove false positives (i.e. eyes wrongly reported closed or open). The binary eye image is divided into two halves, and each eye is examined separately using the same method:
• This method depends on the shape of the eye: the iris has a large area in the middle and narrows toward the left and right. The eye shape is detected by taking the column with the largest number of black pixels, then moving left to check whether the number of black pixels decreases in the next column, and doing the same on the right side.

If at least two of the methods described above indicate that the eye is open, the eyes are considered open. A sketch of these checks is given below.
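The three checks and the final vote can be sketched in Python/NumPy roughly as follows, assuming eye is the 0/1 binary eye region extracted earlier; the two numeric thresholds are placeholders, and the consecutive-frame count is a parameter (the abstract uses five frames, the conclusion mentions three).

import numpy as np

def ratio_says_open(eye, ratio_threshold=0.15):
    """Method 1: the black-to-white pixel ratio is higher when the eye is open."""
    black = np.count_nonzero(eye == 0)
    white = np.count_nonzero(eye == 1)
    return white > 0 and black / white > ratio_threshold    # threshold assumed

def columns_say_open(eye, column_threshold=6):
    """Method 2: the longest black-pixel column shrinks when the lid covers the iris."""
    column_lengths = np.sum(eye == 0, axis=0)                # black pixels per column
    return int(column_lengths.max()) > column_threshold      # threshold assumed

def shape_says_open(eye):
    """Method 3: the iris column is widest in the middle and narrows to both sides."""
    column_lengths = np.sum(eye == 0, axis=0)
    peak = int(np.argmax(column_lengths))
    left_ok = peak > 0 and column_lengths[peak - 1] < column_lengths[peak]
    right_ok = peak < len(column_lengths) - 1 and column_lengths[peak + 1] < column_lengths[peak]
    return left_ok and right_ok

def eye_is_open(eye):
    """The eye is treated as open when at least two of the three checks agree."""
    return sum([ratio_says_open(eye), columns_say_open(eye), shape_says_open(eye)]) >= 2

CLOSED_FRAMES_FOR_ALARM = 5    # the abstract uses 5 consecutive frames; the conclusion mentions 3

def update_alarm(eye, closed_so_far):
    """Track consecutive closed-eye frames; return (raise_alarm, new_count)."""
    if eye_is_open(eye):
        return False, 0
    closed_so_far += 1
    return closed_so_far >= CLOSED_FRAMES_FOR_ALARM, closed_so_far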
VII. SYSTEM ARCHITECTURE

VIII. CONCLUSION

We successfully implemented a drowsiness detection algorithm that captures live images from the camera and detects the difference between the driver's open and closed eyes. Three or more consecutive frames with closed eyes are taken to indicate drowsiness; when this happens, the driver is judged to be sleepy and the system gives the driver a warning signal. The system can detect the state of the eyes with or without ordinary glasses. In some cases the device may give incorrect results because of lighting effects or the position of the driver, so the system needs to be made more reliable in future development so that these variables do not prevent a correct result. To set the threshold values for continuous operation, the system could take a default reading at start-up. In addition, data from one or more other sources, such as the steering wheel, lane position, and accelerator, could also be read.

REFERENCES

[1]. X. Li, E. Seignez, W. Lu, and P. Loonis, "Vehicle Safety Evaluation based on Driver Drowsiness and Distracted and Impaired Driving Performance Using Evidence Theory," IEEE Intelligent Vehicles Symposium (IV), Dearborn, Michigan, USA, June 2014.
[2]. G. Li, B.-L. Lee, and W.-Y. Chung, "Smartwatch-Based Wearable EEG System for Driver Drowsiness Detection," IEEE Sensors Journal, vol. 15, no. 12, December 2015.
[3]. http://drowsydriving.org/about/facts-and-stats.
[4]. S. Lawoyin, X. Liu, D.-Y. Fei, and O. Bai, "Detection Methods for a Low-Cost Accelerometer-Based Approach for Driver Drowsiness Detection," IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, October 2014.
[5]. P. Wang and L. Shen, "A Method of Detecting Driver Drowsiness State Based on Multi-features of Face," 5th International Congress on Image and Signal Processing (CISP), 2012.
[6]. C. Papadelis, Z. Chen, and C.K. Papadeli, "Monitoring Sleepiness with Onboard Electrophysiological Recordings for Preventing Sleep-deprived Traffic Accidents," Clinical Neurophysiology, 118(9):1906-1922, 2007.
[7]. Y. Takei and Y. Furukawa, "Estimate of driver's fatigue through steering motion," IEEE International Conference on Systems, Man and Cybernetics, vol. 2, pp. 1765-1770, 2005.
[8]. J.C. McCall, M.M. Trivedi, D. Wipf, and B. Rao, "Lane change intent analysis using robust operators and sparse Bayesian learning," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, p. 59, Washington, DC, USA: IEEE Computer Society, 2005.
[9]. A. Azman, Q. Meng, and E. Edirisinghe, "Non-intrusive physiological measurement for driver cognitive distraction detection: Eye and mouth movements," 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), 2010.
[10]. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3571819/#b32-sensors-12-16937
[11]. W. Han, Y. Yang, G.-B. Huang, O. Sourina, F. Klanner, and C. Denk, "Driver drowsiness detection based on novel eye openness recognition method and unsupervised feature learning," IEEE International Conference on Systems, Man, and Cybernetics, 2015.
[12]. Z. Li, S.E. Li, R. Li, B. Cheng, and J. Shi, "Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions," Sensors.
[13]. A.G. Correa, L. Orosco, and E. Laciar, "Automatic detection of drowsiness in EEG records based on multimodal analysis," Med. Eng. Phys., 36, 244-249, 2014.
[14]. R. Pooneh, R. Tabrizi, and A. Zoroofi, "Drowsiness Detection Based on Brightness and Numeral Features of Eye Image," Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2009.
