Pornpanomchai 2009
III. METHODOLOGY
This section introduces our approach to creating a system for vehicle speed detection from a video scene. We start with the overall framework of the system, then describe each component in the framework and the basic ideas behind the technique used in each component.

3.1 Overview of Vehicle Speed Detection Framework

The hardware requirements of the vehicle speed detection system are shown in Figure 1(a). The system consists of an ordinary IBM/PC connected to an uncalibrated camera. The input to the system must be the scene of a moving vehicle. The scene has to contain a known-distance frame, consisting of a starting point and an end point, with the moving vehicle, as displayed in Figure 1(b). The basic idea of the system is to calculate the vehicle speed from the known distance and the time when the vehicle first passes the starting point and the time when it finally reaches the end point.

Figure 2. Structure chart of vehicle speed detection system

3.2.1 Image Acquisition

We have decided to use the Microsoft DirectShow library as our tool for receiving the input to the system. Microsoft DirectShow provides a technology called the Filter Graph Manager, which acts as a format-independent video streaming input. Using the Filter Graph Manager, we do not need to worry about the format or the source of the media. The filter graph operates at the device-driver level in order to stream multimedia data through the media system, and it provides a structure for multimedia filters used specifically by the automotive platform. The filter graph is constructed from three filter types: the source filter, the decoder filter and the render filter. These three filters act as low-level media drivers that receive, process and deliver the same data format for all media to the output level. Our Image Acquisition component is in charge of calling the filter graph, grabbing single frames from the video stream, and buffering each single image to memory storage.
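Since DirectShow is a native COM API, rather than guess at its calls we can illustrate the grab-and-buffer behaviour of this component with a plain-Python sketch. Everything here (grab_frame, FrameBuffer) is a hypothetical stand-in for the sample-grabbing path, not the paper's implementation.

```python
import time

def grab_frame(i, width=8, height=6):
    """Stub for the sample grabber: returns one gray-scale frame as rows
    of pixel intensities (0-255). A real system would pull this from the
    render filter of the filter graph."""
    return [[(x + y + i) % 256 for x in range(width)] for y in range(height)]

class FrameBuffer:
    """Buffers (timestamp_ms, frame) pairs, as the Image Acquisition
    component buffers single frames to memory storage."""
    def __init__(self):
        self.frames = []

    def push(self, frame, timestamp_ms):
        self.frames.append((timestamp_ms, frame))

buf = FrameBuffer()
t0 = time.monotonic()
for i in range(3):                         # grab three frames
    ts_ms = (time.monotonic() - t0) * 1000.0
    buf.push(grab_frame(i), ts_ms)

print(len(buf.frames))                     # 3
```

The timestamps saved alongside each frame are exactly the "ticklers" t0 and tn that the speed detection step later consumes.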
3.2.2 Image Enhancement

Given the variety of input formats, color is one of the key factors with a great impact on the system. The image color in each input format can run to 36 million colors, which makes the analysis process difficult. To reduce this difficulty, gray-scaling has been brought into the process. Converting the colored image into a gray-level image cuts away the millions of color levels: images with 36 million color levels can be transformed into 24 levels of colors without losing the abstraction.

3.2.3 Image Segmentation

This operation concerns image segmentation of the moving vehicle. To segment the moving vehicle from the image sequence, we have decided to use the image-differentiation approach. Regarding the image enhancement process, all images in the sequence must first pass through image enhancement, meaning that all of them are gray-scaled images. The first image of the gray-scaled sequence is selected as the reference frame. The next step is to subtract every image in the sequence from the chosen reference frame; the result of the subtraction shows the movements in binary-image form. Our approach to determining the vehicle position is to find the biggest area in the vertical space, which we declare the prospective vehicle entry point. From the newly discovered entry point, the region-growing method is applied; region growing gives us the area of the actual vehicle. The area of the vehicle is saved into a memory data structure called the vehicle coordinate.

3.2.4 Image Analysis

The Image Analysis process is responsible for finding the positions of the mark-points in the reference frame. The gray-scaled reference frame received from the image enhancement process is used as the input of this process. Referring to the framework in Figure 1(a) and Figure 1(b), the mark-points must lie on the dark shaded line, so that the image-thresholding method can be used to distinguish them from the background. After thresholding has been applied to the reference frame, we have a binary image containing only the two mark-points, in black on a white background. The binary image in this step is inverted and sent to the image segmentation process to find the boundary of the vehicle itself. The result of the segmentation process is the 1st mark-point, because the segmentation takes the biggest area in the vertical space as the vehicle coordinate. The next step is therefore to populate a new image without the 1st mark-point; this newly populated image is sent to the image segmentation process to find the 2nd mark-point. When both mark-point positions have been received from the image segmentation process, the process decides which is the starting point and which is the end point. The result of the process is the positions of the starting point and the ending point, which will be used in the speed detection.

3.2.5 Speed Detection

The previous processes have already provided the position of each single vehicle in the image frame and the positions of the mark-points found in the reference frame. The speed of the vehicle in each image is calculated from the position of the vehicle together with the positions of the reference points and the given time stamp. From the calculations performed, a summary is made as almost the final step, giving the average speed of the vehicle from when it first appears between the two mark-points until it moves out of range. Figure 3 gives a visual explanation of the algorithm for finding the vehicle speed.

Figure 3. Diagram displaying all the variables used in the speed detection process

Based on the diagram for calculating the vehicle speed in Figure 3, we can write the equations to find the vehicle speed as shown below.

Distance between the vehicle and the starting point, measured in kilometers:

Distance = Dƒ * (D / Dx) * (Pn − P0) … (1)

Time the vehicle spent moving to Pn, in hours:

Time = Tƒ * (tn − t0) … (2)

Vehicle speed, measured in kilometers per hour:

Speed = Distance / Time … (3)

where
D is the real distance between the two marking points (start point and end point), measured in meters;
Dx is the distance between the two marking points, measured in pixels;
X is the width of the video scene, measured in pixels;
Y is the height of the video scene, measured in pixels;
P0 is the rightmost position of the vehicle at time t = 0, measured in pixels;
Pn is the rightmost position of the vehicle at time t = n, measured in pixels;
t0 is the tickler (timestamp) saved at time t = 0, measured in milliseconds;
tn is the tickler (timestamp) saved at time t = n, measured in milliseconds;
Dƒ is the distance conversion factor from meters to kilometers, which is (1.00/1000.00);
Tƒ is the time conversion factor, in this case from milliseconds to hours, which is (1.00/(1000.00*60.00*60.00)).
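Equations (1)-(3) can be turned into a short worked example. The sketch below uses the dimensionally consistent conversion factors (meters to kilometers is 1/1000; milliseconds to hours is 1/3,600,000); the function name and the sample values are illustrative, not from the paper.

```python
# Sketch of Eqs. (1)-(3). Df converts meters to kilometers and Tf
# converts milliseconds to hours, so the result comes out in km/h.

DF = 1.0 / 1000.0                    # distance factor: meters -> kilometers
TF = 1.0 / (1000.0 * 60.0 * 60.0)   # time factor: milliseconds -> hours

def compute_speed_kmh(d_meters, dx_pixels, p0, pn, t0_ms, tn_ms):
    """Average speed (km/h) of the vehicle between the two mark points.

    d_meters     - real distance between the mark points (m)
    dx_pixels    - distance between the mark points in the image (px)
    p0, pn       - rightmost vehicle position (px) at t = 0 and t = n
    t0_ms, tn_ms - timestamps (ms) at t = 0 and t = n
    """
    distance_km = DF * (d_meters / dx_pixels) * (pn - p0)   # Eq. (1)
    time_h = TF * (tn_ms - t0_ms)                           # Eq. (2)
    return distance_km / time_h                             # Eq. (3)

# Example: mark points 20 m apart spanning 400 px; the vehicle crosses
# the full 400 px gap in 1000 ms, i.e. 20 m/s, which is 72 km/h.
print(compute_speed_kmh(20.0, 400.0, 100, 500, 0, 1000))    # 72.0
```

The D/Dx ratio is what lets the system work with an uncalibrated camera: it converts pixel displacement directly into meters using only the known real-world distance between the mark points.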
3.2.6 Report

The Report process is the last process; it provides the end user with a readable result of the calculation. The output format can be either a text description or a chart displaying the speed of the vehicle as it passes the mark points.
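The differentiation-and-biggest-vertical-area step of Section 3.2.3 can also be sketched in a few lines. The threshold value and all names below are illustrative assumptions, not the paper's code.

```python
# Sketch of the image-differentiation segmentation of Section 3.2.3:
# subtract each frame from the reference frame, threshold the result
# into a binary movement mask, and take the column with the biggest
# vertical extent of changed pixels as the prospective entry point.

THRESHOLD = 30  # intensity difference treated as "movement" (assumed)

def difference_mask(reference, frame):
    """Binary mask: 1 where the frame differs from the reference frame."""
    return [[1 if abs(f - r) > THRESHOLD else 0
             for r, f in zip(ref_row, frm_row)]
            for ref_row, frm_row in zip(reference, frame)]

def entry_column(mask):
    """Column with the biggest vertical run of changed pixels,
    i.e. the prospective vehicle entry point."""
    heights = [sum(row[x] for row in mask) for x in range(len(mask[0]))]
    return max(range(len(heights)), key=heights.__getitem__)

# Example: empty 4x6 reference frame; a "vehicle" occupies columns 2-3
# of rows 1-3 in the current frame.
ref = [[0] * 6 for _ in range(4)]
frm = [row[:] for row in ref]
for y in range(1, 4):
    for x in (2, 3):
        frm[y][x] = 200
mask = difference_mask(ref, frm)
print(entry_column(mask))   # 2
```

From that entry column, a region-growing pass over the mask would then recover the full vehicle area, as the paper describes.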
V. CONCLUSION

Based on the previous section, our experimentation was carried out to achieve three objectives. For the first objective, the usability test, we can say that the system clearly meets this goal: it is capable of detecting the speed of a moving vehicle in a video scene. Our concerns for improving the system lie with the remaining two objectives, the performance and the effectiveness of the system.

Based on the experimental results, the correctness of the system still depends too much on the form of the data. We have therefore analyzed the experimental results to identify the important factors that can affect the system correctness, as listed below.

The complexity of the background - We defined this issue in our specification, but in a real-life application the video scene cannot be fixed. The system has to be able to process any kind of background, even a non-static one.

Size of the video scene - One of the important factors affecting the effectiveness of the system is the size of the video scene. A larger image provides more processing information than a smaller one.

Size of the vehicle - With our approach of determining the position of the vehicle, we use image differentiation to separate the vehicle from the static background. This works well as long as the vehicle is not too small. A very small vehicle can leave the system unable to distinguish between the vehicle and noise, in which case the detection can be wrong.

Fixed characteristic of the marking points - Marking points are very important to the speed calculation process. Given wrongly characterized marking points, the system may not be able to recognize their correct positions.

Stability of the brightness level - Processing the video scene under an unstable brightness level means working with different images and backgrounds in every sampled frame, which may cause unexpected errors in the detection process.

Number of colors from the input video - The gray-scaling process uses just a simple algorithm; it is not designed for very high color depths such as a 1.6 million-color image.

The direction of the moving vehicle - In our experiments we tried moving the vehicle in the reverse direction; the detection process then reports a negative vehicle speed.

Limitation on the number of vehicles in each scene - We proposed this as a limitation in the specification. The system has been implemented to support only a single moving vehicle in the video scene; more than one vehicle moving in the same scene can cause the system to produce an erroneous result.

Stability of the camera - We proposed this limitation in our specification, but in reality there is still some risk that the camera is shaken or moved for unexpected reasons. If that happens, the current system can give a wrong result.

Speed of the vehicle - The vehicle speed that we can detect is bounded by our hardware speed; capturing a fast-moving vehicle definitely requires faster hardware.