
2009 IEEE International Conference on Signal and Image Processing Applications

Vehicle Speed Detection System


Chomtip Pornpanomchai, Kaweepap Kongkittisan
Department of Computer Science, Mahidol University
Rama 6 Road, Rajchatawee, Bangkok 10400
THAILAND
[email protected] [email protected]

Abstract

This research develops a vehicle speed detection system using image processing techniques. The overall work is the software development of a system that requires a video scene consisting of the following components: a moving vehicle, a starting reference point and an ending reference point. The system is designed to detect the position of the moving vehicle in the scene and the positions of the reference points, and to calculate the speed from each static image frame using the detected positions. The vehicle speed detection system consists of six major components: 1) Image Acquisition, for collecting a series of single images from the video scene and storing them in temporary storage; 2) Image Enhancement, to improve some characteristics of the single image in order to provide more accuracy and better subsequent performance; 3) Image Segmentation, to perform vehicle position detection using image differentiation; 4) Image Analysis, to analyze the positions of the reference starting point and the reference ending point using a threshold technique; 5) Speed Detection, to calculate the speed of each vehicle in the single image frame using the detected vehicle position and the reference point positions; and 6) Report, to convey the information to the end user in readable form.

The experimentation was made in order to assess three qualities: 1) Usability, to prove that the system can determine vehicle speed under the specific conditions laid out; 2) Performance; and 3) Effectiveness. The results show that the system works with the highest performance at a resolution of 320x240, taking around 70 seconds to detect a moving vehicle in a video scene.

Keywords- Vehicle Speed Detection, Video Frame Differentiation

I. INTRODUCTION

The idea of using a video camera to measure vehicle speed has been proposed to improve the current speed detection approach, which relies heavily on radar equipment. The use of radar to detect speed has spread widely into different industries, but the equipment itself has disadvantages that cannot be fixed no matter how far the technology improves, as long as it is still based on the radar approach.

The way radar operates is known as the Doppler shift phenomenon, which we probably experience daily. A Doppler shift occurs when a wave is generated by, or reflected off, a moving vehicle; in the extreme it creates sonic booms. When those waves bounce back to the wave generator, the frequency of the signal is changed, and that variation is used to calculate the speed of the moving vehicle. However, this technique still has disadvantages, such as the cost of the equipment, which is the most important reason to find alternative equipment that can reduce the cost of investment. Image processing technology can serve this requirement.

Image processing is a technology based on a software component that does not require special hardware. With a typical video recording device and an ordinary computer, we can create a speed detection device. Using the basic velocity equation, we can calculate the speed of a moving vehicle in the video scene from the known distance and the time the vehicle takes to cover it.

A few key image processing methodologies have been applied in this project: image differentiation is used in the vehicle detection process, image thresholding in the segmentation process, and region filling to find the vehicle boundaries. However, the project is still a prototype, which requires further research and development to overcome the system limitations and enhance the software's performance for real-world application.

II. LITERATURE REVIEWS

Many researchers have applied a wide range of techniques to detecting and measuring vehicle speed. All the techniques are based on hardware equipment and computer software, as follows.

2.1 Electronic Hardware & Computer Software

Yong-Kul Ki et al. [1] used double-loop detector hardware and Visual C++ software to measure vehicle speeds in Korea. Joel L. Wilder et al. [2], J. Pelegri et al. [3] and Ryusuke Koide et al. [4] proposed magnetic sensors combined with computer software to detect vehicle speed. Harry H. Cheng et al. [5] used a laser-based non-intrusive detection system to measure vehicle speeds in real time. Z. Osman et al. [6] applied a microwave signal to detect vehicle speed. Jianxin Fang et al. [7] used

978-1-4244-5561-4/09/$26.00 ©2009 IEEE
continuous-wave radar to detect, classify and measure the speed of vehicles.

2.2 Image Processing & Computer Software

Huei-Yung Lin and Kun-Jhih Li [8] used blurred images to measure vehicle speed. Shisong Zhu et al. [9] proposed car speed measurement from the traffic video signal. Bram Alefs and David Schreiber [10] applied AdaBoost detection and Lucas-Kanade template matching techniques to measuring vehicle speed. S. Pumrin and D.J. Dailey [11] presented a methodology for automated speed measurement.

Our system uses a video recorder to record traffic in a video scene. We then use a distance measurement to calculate the vehicle speed.

III. METHODOLOGY

This part introduces our approach to creating a vehicle speed detection system from a video scene. We start with the overall framework of the system, then describe each component in the framework and the basic technique used in each component.

3.1 Overview of Vehicle Speed Detection Framework

The hardware requirement for the vehicle speed detection system is shown in Figure 1(a). The system consists of an ordinary IBM/PC connected to an uncalibrated camera. The input of the system must be the scene of a moving vehicle. The scene has to contain a known-distance frame, which consists of the starting point, the ending point and the moving vehicle, as displayed in Figure 1(b). The basic idea of the system is to calculate the vehicle speed from the known distance and the time between the moment the vehicle first passes the starting point and the moment it finally reaches the ending point.

Figure 1. (a) Diagram showing the required system hardware to gather the input. (b) The video scene structure.

3.2 Vehicle Speed Detection System Structure Chart

To provide a deeper understanding of each operation of the vehicle speed detection system, we first introduce the structure of the system as shown in Figure 2, and then elaborate on how each working module is constructed. Based on the structure chart in Figure 2, our system consists of six major components: 1) Image Acquisition, 2) Image Enhancement, 3) Image Segmentation, 4) Image Analysis, 5) Speed Calculation, and 6) Report. Each component has the following details.

Figure 2. Structure chart of the vehicle speed detection system

3.2.1 Image Acquisition

We have decided to use the Microsoft DirectShow library as our tool to receive the input to the system. Microsoft DirectShow provides a technology called the Filter Graph Manager, which performs format-independent video streaming input. Using the Filter Graph Manager, we need not worry about the format or the source of the media. The filter graph performs at the device-driver level in order to stream multimedia data through the media system, and provides a structure for multimedia filters. The filter graph is constructed of three filter types: the source filter, the decoder filter and the render filter. These three filters act as a low-level media driver to receive, process and deliver the same data format for all media to the output level. Our Image Acquisition component is in charge of calling the filter graph, grabbing single frames from the video stream, and buffering each single image to memory storage.

3.2.2 Image Enhancement

We first experimented with a couple of algorithms to improve the image quality for the subsequent steps, such as noise reduction and image smoothing. But the experimental results were not very good, because all those methodologies were time consuming. So we cut the operations that are not useful to our analysis process; the remaining two operations are Image Scaling and Gray Scaling.

Image Scaling is used in order to support input of various sizes. Understanding the format of the images helps us determine the time needed to process each single image and display it to the output device.
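The two retained enhancement operations are simple pixel-wise transforms. A minimal sketch in Python with NumPy follows; the paper does not give its implementation, so the BT.601 luminance weights and the nearest-neighbour resampling here are common choices, not necessarily the authors':

```python
import numpy as np

def to_gray(rgb):
    """Collapse an H x W x 3 color frame to one gray value per pixel
    using the ITU-R BT.601 luminance weights (a common convention)."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

def scale_nearest(img, out_h, out_w):
    """Resample a gray image to a fixed working resolution
    (e.g. 320 x 240) by nearest-neighbour index lookup."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
working = scale_nearest(to_gray(frame), 240, 320)
print(working.shape)  # (240, 320)
```

Working on a single 8-bit channel at a fixed small resolution is what keeps the later per-frame subtraction cheap.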
Regarding the variety of input formats, color is one of the key factors that has a great impact on the system. The image color in each input format can run up to 36 million colors, which makes the analysis process difficult. To reduce this difficulty, Gray Scaling has been brought into the process. Converting the color image to a gray-level image means cutting off millions of color levels: images with 36 million color levels can be transformed into 24 gray levels without losing the abstraction.

3.2.3 Image Segmentation

For this operation we are concerned with segmenting the moving vehicle. To segment the moving vehicle from the image sequence, we decided to use the image differentiation approach. Because of the image enhancement process, all images in the sequence are gray-scaled. The first image of the gray-scaled sequence is selected as the reference frame. The next step is to subtract every image in the sequence from this reference frame; the result of the subtraction shows the movement in binary image form. Our approach to determining the vehicle position is to find the biggest area in the vertical space, which we declare to be the prospective vehicle entry point. From this newly discovered entry point, the region-growing method is applied; it gives us the area of the real vehicle. The vehicle area is saved into a memory data structure called the vehicle coordinate.

3.2.4 Image Analysis

The Image Analysis process is responsible for finding the positions of the mark points in the reference frame. The gray-scaled reference frame received from the image enhancement process is used as the input of this process. Referring to the framework in Figures 1(a) and 1(b), the mark points must lie on the dark shaded line, so that the image thresholding method can distinguish them from the background. After thresholding has been applied to the reference frame, we have a binary image containing only the two mark points, in black on a white background. This binary image is inverted and sent to the image segmentation process to find the boundary, just as for the vehicle itself. The result of the segmentation process is the 1st mark point, because the segmentation takes the biggest area in the vertical space as the coordinate. So the next step is to populate a new image without the 1st mark point; this newly populated image is sent to the image segmentation process again to find the 2nd mark point. When both mark point positions have been received from the image segmentation process, the process decides which is the starting point and which is the ending point. The result is the position of the starting point and the ending point, which will be used in the speed detection.

3.2.5 Speed Detection

The previous processes have provided the position of the vehicle in each single image frame and the positions of the mark points found in the reference frame. The speed of the vehicle in each image is calculated from the position of the vehicle together with the positions of the reference points and the frame timestamp. From these per-frame calculations, a summary is made as almost the final step, giving the average speed of the vehicle from when it first appears between the two mark points until it moves out of range. Figure 3 gives a visual explanation of the algorithm for finding the vehicle speed.

Figure 3. Diagram displaying all the variables used in the speed detection process

Based on the diagram in Figure 3, we can write the equations for the vehicle speed as shown below.

Distance between the vehicle and the starting point, in kilometers:

    Distance = Dƒ * (D / Dx) * (Pn - P0)    ...(1)

Time the vehicle spent moving to Pn, in hours:

    Time = Tƒ * (tn - t0)    ...(2)

Vehicle speed, in kilometers per hour:

    Speed = Distance / Time    ...(3)

where
    D  is the real distance between the two marking points (start point and end point), in meters;
    Dx is the distance between the two marking points, in pixels;
    X  is the width of the video scene, in pixels;
    Y  is the height of the video scene, in pixels;
    P0 is the rightmost vehicle position at time t = 0, in pixels;
    Pn is the rightmost vehicle position at time t = n, in pixels;
    t0 is the tickler (timestamp) saved at time t = 0, in milliseconds;
    tn is the tickler (timestamp) saved at time t = n, in milliseconds;
    Dƒ is the distance conversion factor from meters to kilometers, i.e. (1.00/1000.00);
    Tƒ is the time conversion factor from milliseconds to hours, i.e. (1.00/(1000.00*60.00*60.00)).

3.2.6 Report

The Report process is the last process; it provides the end user with a readable result of the calculation. The output can be either a text description or a chart displaying the speed of the vehicle as it passes the mark points.

IV. EXPERIMENTAL RESULT

In this section, the experimental results are presented in order to prove whether vehicle speed detection from a video scene is applicable. We first present the experiment that demonstrates how to use our system to capture the speed of a moving vehicle in a video scene. The next part demonstrates the effectiveness testing of the system, showing how much accuracy the system provides. Finally, we end with the performance test, reporting how fast the system can perform.

4.1 Usability Proof

This experiment begins with the computer software analysis window. The screen shows a list of image frames captured from the video scene. Our input data in this experiment is a radio-controlled toy car, which moves from the left to the right side of the scene. Figure 4 shows the moving vehicle in frame number 9, between first appearing in the scene and reaching the ending mark point. For every single frame, the vehicle position has been detected together with the frame timestamp. With this information plus the positions of the starting and ending mark points, a simple manual calculation can be made to find the vehicle speed in each frame. Table 1 lists the video frame numbers with the timestamp and car speed of each frame; the final line of the table shows the average speed of the car.

Figure 4. The captured screen of the 9th frame

Table 1. Result of programmatic detection on 640 x 480 resolution data containing the moving toy car (object type: car; motion: normal)

Frame | Screen  | Position (pixels)       | Time Stamp | Speed
No.   | Shot    | Left  Right  Top  Bottom| (ms)       | (km/h)
  1   | A1.1.1  |   2     41   355   386  | 7351250    | 0.00
  2   | A1.1.2  |   2     71   345   399  | 7351296    | 3.92
  3   | A1.1.3  |   2     94   338   431  | 7351343    | 3.43
  4   | A1.1.4  |  21    112   339   423  | 7351390    | 3.05
  5   | A1.1.5  |  41    142   339   433  | 7351453    | 2.99
  6   | A1.1.6  | 119    245   195   319  | 7351500    | 4.90
  7   | A1.1.7  |  41    214   327   439  | 7351546    | 3.51
  8   | A1.1.8  | 121    240   342   455  | 7351593    | 3.49
  9   | A1.1.9  | 136    288   331   457  | 7351656    | 3.66
 10   | A1.1.10 | 232    345   158   320  | 7351703    | 4.03
 11   | A1.1.11 | 248    430   119   327  | 7351750    | 4.68
 12   | A1.1.12 | 284    390   339   471  | 7351796    | 3.84
 13   | A1.1.13 | 294    454   344   478  | 7351859    | 4.08
 14   | A1.1.14 | 367    503   347   478  | 7351906    | 4.23
 15   | A1.1.15 | 379    520   146   372  | 7351953    | 4.10
 16   | A1.1.16 | 402    590    83   311  | 7352000    | 4.40
 17   | A1.1.17 | 523    638   355   478  | 7352062    | 4.42
 18   | A1.1.18 | 539    638   256   478  | 7352109    | 4.18
 19   | A1.1.19 | 583    638   374   468  | 7352156    | 3.96
Average speed (km/h): 3.73

4.2 Effectiveness Test

This stage, the effectiveness test, is done under the same infrastructure as the usability proof, but the experiment is changed to focus more on the correctness of the result. Our experiments were made under different scenarios, created by summarizing the factors that we consider can have an impact on system correctness, such as the image resolution, the size of the vehicle relative to the whole image frame, the movement of the vehicle recorded by the camera, and the complexity of the background.

4.3 Performance Test

In this stage, our main concern is the execution time of the system while performing the automatic analysis process. With the same idea as in the previous section, the testing is done under different circumstances covering the factors we consider causes of performance impact. For convenience, we used the same factors and scenarios as in the effectiveness test. Table 2 shows the results of the performance test.

Table 2. Performance results using the moving-car scene at each resolution (processing time in milliseconds)

Resolution | Background Type  | Turn Around Time | Detecting Marking Point | Time Per Frame
640 x 480  | Plain background | ~781,000         | ~4,500                  | ~43,000
320 x 240  | Plain background | ~78,000          | ~1,100                  | ~5,000
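Sections 3.2.3 and 3.2.5 together amount to a short pipeline: threshold the difference against the reference frame, locate the vehicle's rightmost column, and apply Eqs. (1)-(3). The sketch below (Python with NumPy) is an illustration, not the authors' code: the threshold value, the rightmost-column shortcut (standing in for the largest-vertical-area and region-growing steps), and the calibration of D = 1 m over Dx = 600 px are all assumptions, since the paper does not publish its calibration.

```python
import numpy as np

def motion_mask(gray, reference, thresh=30):
    """Binary motion image: absolute difference against the gray
    reference frame, thresholded (threshold value is an assumption)."""
    diff = np.abs(gray.astype(np.int16) - reference.astype(np.int16))
    return diff > thresh

def rightmost_vehicle_column(mask):
    """Rightmost column containing motion pixels, a simplified stand-in
    for the paper's largest-vertical-area + region-growing step."""
    nonzero_cols = np.flatnonzero(mask.sum(axis=0))
    return int(nonzero_cols[-1]) if nonzero_cols.size else None

def vehicle_speed_kmh(d_m, d_px, p0, pn, t0_ms, tn_ms):
    """Eqs. (1)-(3): Distance = Df*(D/Dx)*(Pn-P0), Time = Tf*(tn-t0),
    Speed = Distance / Time."""
    D_F = 1.0 / 1000.0                   # metres -> kilometres
    T_F = 1.0 / (1000.0 * 60.0 * 60.0)  # milliseconds -> hours
    distance_km = D_F * (d_m / d_px) * (pn - p0)
    time_h = T_F * (tn_ms - t0_ms)
    return distance_km / time_h

# Rows 1-2 of Table 1: the rightmost position moves 41 -> 71 px in 46 ms.
# D = 1.0 m over Dx = 600 px is an assumed calibration for illustration.
print(round(vehicle_speed_kmh(1.0, 600, 41, 71, 7351250, 7351296), 2))
# -> 3.91
```

With that assumed calibration the result lands close to the 3.92 km/h the paper reports for frame 2, which suggests the table's speeds follow directly from the pixel displacements and millisecond timestamps.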
V. CONCLUSION

Based on the previous section, our experimentation was made to achieve three objectives. For the first objective, the usability test, we can say that the system clearly qualifies: it is capable of detecting the speed of a moving vehicle in a video scene. Our concerns for improving the system lie with the remaining two objectives, the performance and the effectiveness of the system.

Based on the experimental results, the correctness of the system still depends too much on the form of the data. We have therefore analyzed the experimental results to identify the important factors that can affect system correctness, as follows.

The complexity of the background - We defined this issue in our specification, but in a real-life application the video scene cannot be fixed. The system has to be able to process any kind of background, even a non-static one.

Size of the video scene - One of the important factors affecting the effectiveness of the system is the size of the video scene. A larger image provides more processing information than a smaller one.

Size of the vehicle - With regard to our approach of using image differentiation to separate the vehicle from the static background, this works well as long as the vehicle is not too small. A very small vehicle can leave the system unable to distinguish between the vehicle and noise, and when this happens the detection process can be wrong.

Fixed characteristics of the marking points - Marking points are essential to the speed calculation process. Given wrongly characterized marking points, the system may not be able to recognize their correct positions.

Stability of the brightness level - Processing the video scene under an unstable brightness level means working with different images and backgrounds in every sampled image, which may produce unexpected errors in the detection process.

Number of colors in the input video - The Gray Scaling process uses just a simple algorithm, which is not designed for very many color levels, such as a 1.6 million-color image.

The direction of the moving vehicle - In our experiments we tried moving the vehicle in the reverse direction; the detection process then reports a negative vehicle speed.

Limitation on the number of vehicles in each scene - We proposed this as a limitation in the specification. The system has been implemented to support only a single moving vehicle in the video scene; more than one vehicle moving in the same scene can cause the system to give an erroneous result.

Stability of the camera - This is a limitation we proposed in our specification. In reality, there are still risks that the camera can be shaken or moved for many unexpected reasons; with the current system, if such errors happen, the system can give a wrong result.

Speed of the vehicle - The vehicle speed that we can detect is bounded by our hardware speed. Capturing a fast-moving vehicle definitely requires faster hardware.

REFERENCES

[1] Yong-Kul Ki and Doo-Kwon Baik, "Model for Accurate Speed Measurement Using Double-Loop Detectors", IEEE Transactions on Vehicular Technology, Vol. 55, No. 4, pp. 1094-1101, July 2006.
[2] Joel L. Wilder, Aleksandar Milenkovic and Emil Jovanov, "Smart Wireless Vehicle Detection System", The 40th Southeastern Symposium on System Theory, University of New Orleans, pp. 159-163, March 2008.
[3] J. Pelegri, J. Alberola and V. Llario, "Vehicle Detection and Car Speed Monitoring System using GMR Magnetic Sensors", The IEEE Annual Conference of the Industrial Electronics Society (IECON 02), pp. 1693-1695, November 2002.
[4] Ryusuke Koide, Shigeyuki Kitamura, Tomoyuki Nagase, Takashi Araki, Makoto Araki and Hisao Ono, "A Punctilious Detection Method for Measuring Vehicles' Speeds", The International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2006), Yonago Convention Center, Tottori, Japan, pp. 967-970, 2006.
[5] Harry H. Cheng, Benjamin D. Shaw, Joe Palen, Bin Lin, Bo Chen and Zhaoqing Wang, "Development and Field Test of a Laser-Based Nonintrusive Detection System for Identification of Vehicles on the Highway", IEEE Transactions on Intelligent Transportation Systems, Vol. 6, No. 2, pp. 147-155, June 2005.
[6] Z. Osman and S. Abou Chahine, "Novel Speed Detection Scheme", The 14th International Conference on Microelectronics (ICM 2002), pp. 165-168, December 2002.
[7] Jianxin Fang, Huadong Meng, Hao Zhang and Xiqin Wang, "A Low-cost Vehicle Detection and Classification System based on Unmodulated Continuous-wave Radar", The IEEE International Conference on Intelligent Transportation Systems, Seattle, USA, pp. 715-720, October 2007.
[8] Huei-Yung Lin and Kun-Jhih Li, "Motion Blur Removal and Its Application to Vehicle Speed Detection", The IEEE International Conference on Image Processing (ICIP 2004), pp. 3407-3410, October 2004.
[9] Shisong Zhu and Toshio Koga, "Feature Point Tracking for Car Speed Measurement", The IEEE Asia Pacific Conference on Circuits and Systems (APCCAS 2006), pp. 1144-1147, December 2006.
[10] Bram Alefs and David Schreiber, "Accurate Speed Measurement from Vehicle Trajectories using AdaBoost Detection and Robust Template Tracking", The IEEE International Conference on Intelligent Transportation Systems, Seattle, USA, pp. 405-412, October 2007.
[11] S. Pumrin and D.J. Dailey, "Roadside Camera Motion Detection for Automated Speed Measurement", The IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, pp. 147-151, September 2002.
