Driver Alert System Based On Eye Recognition
Chapter 1
INTRODUCTION
Road accidents are a major cause of death, claiming millions of lives. To help prevent such accidents, we propose a system that alerts the driver when drowsiness is detected. Facial landmark detection, applied through image processing of face images captured by a camera, is used to detect distraction or drowsiness.
Driver fatigue contributes to road accidents every year. It is not easy to estimate the exact number of sleep-related accidents, but research suggests that driver fatigue may be a contributing factor in up to 20% of road accidents. These accidents are about 50% more likely to result in death or serious injury, since they tend to occur at higher-speed impacts and a driver who has fallen asleep cannot brake. Drowsiness slows reaction time, which is a critical element of safe driving. It also reduces alertness, vigilance and concentration, so the capacity to perform attention-based activities such as driving is impaired. The speed at which information is processed is reduced, and the quality of decision-making may also be affected.
Drivers are generally aware when they are feeling sleepy, and so make a conscious decision about whether to continue driving or to stop for a rest. It may be that those who persist in driving underestimate the risk of actually falling asleep at the wheel, or that some drivers choose to ignore the risks in the same way that drink-drivers do. Crashes caused by tired drivers are most likely to happen on long journeys on monotonous roads, such as motorways; between 2 pm and 4 pm, especially after eating or after an alcoholic drink; between 2 am and 6 am; after having less sleep than normal; after drinking alcohol; when the driver takes medicines that cause drowsiness; and after long working hours or on journeys home after long shifts, especially night shifts.
1.1 Motivation of Project
1.2 Problem Statement
Countless people drive long distances every day and night on the highway, and lack of sleep may lead to an accident. To prevent this type of accident, we propose a driver alert system based on eye recognition.
Chapter 2
Scope: Detect eye blinks using facial landmarks and OpenCV. Facial landmark detection is used to localize the eyes in each frame of a video stream. The eye aspect ratio is then computed for each eye, which gives a single value relating the distance between the vertical eye landmarks to the distance between the horizontal eye landmarks. The eye aspect ratio remains approximately constant while the eyes are open, rapidly approaches zero during a blink, and then increases again as the eyes open.
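A minimal sketch of this computation, assuming the six landmark points per eye (p1 to p6) returned by dlib's 68-point facial landmark predictor; the ordering of the points is the usual dlib convention and is an assumption here:

# Sketch of the eye aspect ratio (EAR) computation, assuming six
# landmark points per eye (p1..p6) from dlib's 68-point shape predictor.
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # eye: sequence of six (x, y) landmark coordinates
    A = dist.euclidean(eye[1], eye[5])   # first vertical distance
    B = dist.euclidean(eye[2], eye[4])   # second vertical distance
    C = dist.euclidean(eye[0], eye[3])   # horizontal distance
    return (A + B) / (2.0 * C)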
Requirements:
RAM - 256 GB
Hard Disk - 10 GB
Compiler - gcc
Chapter 3
SYSTEM DESIGN
3.1 Flowchart:
3.2 Algorithm:
➢ Compute the eye aspect ratio to determine whether the eyes are closed.
➢ If the eye aspect ratio indicates that the eyes have been closed for a sufficiently long period of time, sound an alarm to alert the driver (see the sketch below).
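A minimal sketch of this loop, reusing the eye_aspect_ratio function sketched in Chapter 2 and a hypothetical get_eye_landmarks helper (not part of the report); the threshold and frame-count values are illustrative only:

import cv2

EAR_THRESHOLD = 0.25       # below this value the eyes are treated as closed (illustrative)
CLOSED_FRAMES_LIMIT = 48   # consecutive closed frames before the alert fires (illustrative)

counter = 0
cap = cv2.VideoCapture(0)                # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    eyes = get_eye_landmarks(frame)      # hypothetical helper returning (left_eye, right_eye) or None
    if eyes is None:
        continue
    ear = (eye_aspect_ratio(eyes[0]) + eye_aspect_ratio(eyes[1])) / 2.0
    if ear < EAR_THRESHOLD:
        counter += 1
        if counter >= CLOSED_FRAMES_LIMIT:
            print("DROWSINESS ALERT")    # stand-in for sounding the alarm
    else:
        counter = 0
cap.release()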
Chapter 4
It can be seen from the above table that if sample 1 is not taken into consideration, the system has an accuracy of nearly 94%. That said, the high number of errors in sample 1 shows that the system is prone to error and has certain limitations. In sample 1 we did not use the backlight of the webcam, and the resulting poor lighting conditions gave a highly erroneous output.
4.2 Output (Snapshots):
Read the recorded video using the video reader. In OpenCV, the first step is to read the video: the video reader, together with its read method, reads video data from a file into the program frame by frame. The supported file formats vary from platform to platform; AVI video, including uncompressed, indexed, grayscale and Motion JPEG-encoded video, is supported on all platforms (Windows, Macintosh, Linux, etc.). The video reader constructs the object required to read video data from the file.
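A minimal sketch of this step using OpenCV's cv2.VideoCapture, whose read() method returns one frame at a time; the file name is a placeholder:

import cv2

cap = cv2.VideoCapture("recorded_drive.avi")   # placeholder file name
while True:
    ok, frame = cap.read()       # ok becomes False once no more frames can be read
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # later stages operate on grayscale frames
    # ... face and eye processing would go here ...
cap.release()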
Once the face region has been recognized, the next major step is eye detection. To detect the eyes, the two images produced earlier, the histogram-equalized image and the thresholded image, are used; from these, the eyes and the low-intensity portions of the face are located for further processing, and the resulting image with the located eyes and low-intensity regions is shown in the corresponding snapshot. To darken and consolidate the eye region, morphological operations such as erosion and dilation are used. Morphology is a method of image processing based on shapes: the value of each pixel in the output (binary) image is based on a comparison of the corresponding pixel in the input image with its neighbors. By choosing the size and shape of the neighborhood, a morphological operation can be constructed that is sensitive to specific shapes in the input image. Morphological functions are used for many image processing tasks, such as contrast enhancement, thinning, filling, noise removal and segmentation. Common morphological operations include erosion, dilation, opening, closing and boundary extraction; erosion and dilation are the fundamental ones.
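A sketch of how erosion and dilation might be applied here, assuming a thresholded face crop as input; the file name, threshold value and kernel size are assumptions:

import cv2
import numpy as np

gray = cv2.imread("face_region.png", cv2.IMREAD_GRAYSCALE)        # hypothetical face crop
_, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)   # dark eye pixels become white (threshold assumed)

kernel = np.ones((3, 3), np.uint8)                   # small structuring element (size assumed)
eroded = cv2.erode(binary, kernel, iterations=1)     # removes isolated specks
dilated = cv2.dilate(eroded, kernel, iterations=2)   # grows the surviving eye blobs back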
After smoothing with a Gaussian filter, the eye region is clearly detected. To determine whether the eye is open or closed, it is necessary to locate the exact position of the eye, and to trace this precise location the circular Hough transform is applied; the corresponding figure shows the exact eye region. The Hough transform is used to detect lines, circles and other parametric curves, and is computed on a binary or grayscale image. For line detection the transform returns an accumulator matrix parameterized by the angle theta (in degrees) and the distance rho, i.e., the arrays of rho and theta values over which the transform is evaluated; for the circular transform used here the accumulator is parameterized by the circle centre and radius instead. The input image must be real and two-dimensional. In this way the exact region of the eyes has been located.
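A sketch of this step using OpenCV's cv2.GaussianBlur and cv2.HoughCircles; the eye-crop file name and all circle-detection parameters are illustrative assumptions that would need tuning:

import cv2
import numpy as np

eye_region = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)   # hypothetical eye crop
smoothed = cv2.GaussianBlur(eye_region, (5, 5), 0)                # Gaussian smoothing before the transform

# Circular Hough transform; all parameters below are illustrative.
circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=5, maxRadius=30)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(eye_region, (x, y), r, 255, 1)    # mark the detected iris/pupil circle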
By looking at the image a human can immediately tell whether the eyes are open or closed, but the software cannot. To obtain a definite result, it is necessary to convert the final image into graphical form, and the histogram function is used for this. A histogram represents the distribution of the data in a data set: each data point is placed into a bin based on its value, and the histogram is a plot of the number of data points in each bin. Histograms are widely used for characterizing the spread of data from repeated trials and for estimating the probability of a measurement. Histogram processing is also used for computationally adjusting contrast, and the method can be applied to numerous problems, including color balance and tone transfer. Histogram processing is used here in three main ways. The first is basic pixel operations and how they change the histogram, with a number of schemes for stretching a histogram to cover the full range of pixel values. The second is histogram equalization, which spreads the pixel values out uniformly over the full range. The third is controlled modification of the pixel values, which can be used to match any arbitrary target distribution (histogram matching).
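As a rough sketch of these histogram operations in OpenCV (cv2.calcHist and cv2.equalizeHist); the input file name and the low-intensity bin range are assumptions:

import cv2

eye_img = cv2.imread("located_eye.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image of the located eye

hist = cv2.calcHist([eye_img], [0], None, [256], [0, 256])      # pixel count per intensity value
equalized = cv2.equalizeHist(eye_img)                           # spreads intensities over the full 0-255 range

dark_pixels = int(hist[:50].sum())    # pixels in the lowest bins (pupil/iris region; bin range assumed)
print("dark-pixel count:", dark_pixels)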
A further refinement would be to use one camera with a wide field of view in order to locate the eyes, and a second camera with a narrow field of view in order to detect fatigue. The present system only looks at the number of consecutive frames in which the eyes are closed, and by that point it may be too late to issue the warning; by studying eye movement patterns it may be possible to generate the warning sooner. Using 3D images is another possibility for finding the eyes: the eyes are the deepest part of a 3D image, and this may be a more robust way of localizing them. Adaptive binarization is an addition that can help make the system more robust. It may also eliminate the need for the noise removal function, cutting down the computation needed to find the eyes, and it allows the system to adapt to changes in ambient light. The system does not work for dark-skinned individuals. This can be corrected by an adaptive light source that measures the amount of light being reflected back and increases the intensity when little light is reflected. Darker-skinned individuals need much more light, so that when the binary image is constructed the face is white and the background is black.
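The adaptive binarization mentioned above is available in OpenCV as cv2.adaptiveThreshold, which computes a separate threshold for each neighbourhood and so adapts to changes in ambient light; a sketch, where the file name, block size and offset are assumptions:

import cv2

gray = cv2.imread("frame_gray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical grayscale frame
# Threshold is computed per 21x21 neighbourhood (block size and offset are assumed values).
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 21, 10)   # dark features such as eyes become white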
Sometimes it is a problem to achieve the optimum range between the face and the camera; when the distance between the face and the webcam is outside this range, certain problems arise. When the face is too close to the webcam (less than 30 cm), the system is unable to detect the face in the image, so it only shows the video as output, since the algorithm is designed to detect the eyes within the face region. This can be resolved by detecting the eyes directly in the complete image, using an object detection function, instead of within the face region, so that the eyes can be monitored even when no face is detected.
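One possible way to detect the eyes directly from the complete frame, as suggested above, is OpenCV's Haar cascade object detector with the bundled haarcascade_eye.xml; a sketch, with the frame file name and detection parameters as assumptions:

import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("frame.png")                    # hypothetical full frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Search the whole image rather than a previously detected face region.
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # outline each detected eye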
When the face is far from the webcam (more than 70 cm), the backlight is insufficient to illuminate the face properly, so the eyes are not detected with high accuracy and errors appear in the drowsiness detection. This issue is not treated as serious because in a real-time scenario the distance between the driver's face and the webcam does not exceed 50 cm, so the problem never arises.
Considering the above difficulties, the optimum distance range for drowsiness detection is set to 40-70 cm. If more than one face is detected by the webcam, the system gives an erroneous result, so a system is required that gives a correct result even when multiple faces are present. Last but not least, a further piece of future scope is developing an algorithm that works for drivers wearing spectacles; this issue has not yet been solved and is a challenge for almost all eye detection systems.
In a real-time driver fatigue detection system it is desirable to slow the vehicle down automatically when the fatigue level crosses a certain limit. Instead of a single threshold drowsiness level, it is suggested to design a continuous-scale driver fatigue detection system: it monitors the level of drowsiness continuously, and when this level exceeds a certain value a signal is generated which controls the hydraulic braking system of the vehicle.
Hardware components required: dedicated hardware for image acquisition, processing and display, and interface support with the hydraulic braking system, which includes a relay, a timer, a stepper motor and a linear actuator.
Function: when the drowsiness level exceeds a certain limit, a signal is generated and communicated to the relay through the parallel port (parallel data transfer is required for faster results). The relay drives an on-delay timer, which in turn runs the stepper motor for a definite time period. The stepper motor is connected to a linear actuator, which converts the rotational movement of the stepper motor into linear motion. This linear motion drives a shaft directly connected to the hydraulic braking system of the vehicle; when the shaft moves, it applies the brake and the vehicle speed decreases.
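A sketch of how such a continuous drowsiness level could be computed in software as the fraction of closed-eye frames over a sliding window, with the brake signal left as a placeholder since the relay and actuator interface is hardware-specific; the window length and trigger level are assumptions:

from collections import deque

WINDOW_FRAMES = 150   # frames in the sliding window (assumed, roughly 5 s at 30 fps)
BRAKE_LEVEL = 0.40    # drowsiness level above which the brake signal is raised (assumed)

recent = deque(maxlen=WINDOW_FRAMES)   # 1 = eyes closed in that frame, 0 = open

def send_brake_signal(level):
    # Placeholder: the real system would drive the relay, on-delay timer,
    # stepper motor and linear actuator described above.
    print("brake signal, drowsiness level = %.2f" % level)

def update_drowsiness(eyes_closed):
    # Returns the current drowsiness level as a value between 0 and 1.
    recent.append(1 if eyes_closed else 0)
    level = sum(recent) / len(recent)
    if level > BRAKE_LEVEL:
        send_brake_signal(level)
    return level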
References
• https://www.tutorialspoint.com/opencv/
• https://www.opencv-srf.com/p/introduction.html
• https://docs.opencv.org/3.4/d9/df8/tutorial_root.html
• http://dali.feld.cvut.cz/ucebna/matlab/toolbox/images/fspecial.html
• http://wwwrohan.sdsu.edu/doc/matlab/toolbox/images/morph3.html#12508
• http://www.ee.ryerson.ca/~phiscock/thesis/drowsy-detector/drowsy-detector.pdf
• http://www.cnblogs.com/skyseraph/archive/2011/02/24/1963765.html
• http://www.scribd.com/doc/15491045/Learning-OpenCV-Computer-Vision-with-theOpenCV-Library
• http://opencv.willowgarage.com/documentation/reading_and_writing_images_and_video.html
• http://www.scribd.com/doc/46566105/opencv
• Learning OpenCV by Gary Bradski and Adrian Kaehler
• http://note.sonots.com/SciSoftware/haartraining.html