Article
Autonomous Mobile Robot Navigation in Sparse LiDAR
Feature Environments
Phuc Thanh-Thien Nguyen , Shao-Wei Yan, Jia-Fu Liao and Chung-Hsien Kuo *
Abstract: In the industrial environment, Autonomous Guided Vehicles (AGVs) generally run on a planned route. Among trajectory-tracking algorithms for unmanned vehicles, the Pure Pursuit (PP) algorithm is prevalent in many real-world applications because of its simple and easy implementation. However, it is challenging to decelerate the AGV when it turns on a large-curvature path. Moreover, this paper addresses the kidnapped-robot problem occurring in sparse LiDAR environments. This paper proposes an improved Pure Pursuit algorithm so that the AGV can predict the trajectory and decelerate for turning, thus increasing the accuracy of path tracking. To solve the kidnapped-robot problem, we use a learning-based classifier to detect repetitive-pattern scenarios (e.g., a long corridor) from 2D LiDAR features and switch the localization system between the Simultaneous Localization And Mapping (SLAM) method and the odometry method. Practical experimental results show that the improved Pure Pursuit algorithm reduces the tracking error while performing more efficiently. Moreover, the learning-based localization selection strategy helps the robot navigation task achieve stable performance, with a completion rate 36.25% higher than using SLAM alone. The results demonstrate that the proposed method is feasible and reliable in actual conditions.

Citation: Nguyen, P.T.-T.; Yan, S.-W.; Liao, J.-F.; Kuo, C.-H. Autonomous Mobile Robot Navigation in Sparse LiDAR Feature Environments. Appl. Sci. 2021, 11, 5963. [Link] 10.3390/app11135963

Keywords: path planning; pure pursuit controller; trajectory tracking; deep learning; robot kidnapping detection
factors in Particle Swarm Optimization to prevent falling into local minima and to increase convergence speed.
Following the path-planning process, trajectory tracking is required so that the AGV can move according to a set trajectory path. With the development of technology,
various trajectory-tracking methods have been proposed. Wu et al. [5] introduced a local
linear Model Predictive Control (MPC) to track the nonlinear vehicle model velocity and
path simultaneously. In [6], a reference trajectory is predefined using a sigmoid function.
Then the trajectory is adjusted dynamically by a nonlinear MPC when an obstacle appears
in the predictive horizon. Besides MPC, Yang et al. [7] proposed a Fixed-Time Control
method and a Fixed-Time Sliding Mode Controller for trajectory-tracking control while meeting predetermined performance and disturbance-suppression requirements. Furthermore, an
adaptive trajectory-following strategy was proposed in [8] that constructs a knowledge
database through the Particle Swarm Optimization (PSO) algorithm to optimize the con-
troller parameters set according to various vehicle speed and heading error combinations.
Meanwhile, Yan et al. [9] proposed a hybrid visual trajectory strategy in which a 2.5D
visual servo framework was used to enhance trajectory-tracking behavior.
Although non-geometric controllers such as MPC can be applied to linear or nonlinear
models with multiple constraints, their limitations are heavy computation and an inability
to provide a closed-form solution when the model is sophisticated. On the other hand,
the Pure Pursuit (PP) algorithm is a popular trajectory-tracking algorithm because of
its simplicity, efficiency, and low computational requirements, even in limited resource
conditions. It computes angular velocity to move the robot from its current position to some
look-ahead point in front of the robot. However, tracking performance degrades when the look-ahead distance is improperly selected. Chen et al. [10] combined the PP algorithm with a Proportional-Integral (PI) controller to smooth the final output steering angle through a low-pass filter and verified its feasibility through simulation experiments. By analyzing the
vehicle speed and the shortest distance between the GPS trajectory and the current vehicle
position, Wang et al. [11] proposed an algorithm that can reduce the lateral error when the
vehicle tracks the ideal path. Meanwhile, a Pure Pursuit algorithm based on the optimized
look-ahead distance (OLDPPA) [12] introduced an adaptive random motion mechanism of particles in the Salp Swarm Algorithm to improve exploitation and exploration capabilities.
foggy areas, etc. On the other hand, sensor-based odometry methods have proven their accuracy and robustness in various scenarios, even in challenging environments.
Currently, modern image classification systems based on deep neural networks, in-
cluding Inception V3 [18] and YOLO V3 [19], are more accurate than traditional machine
learning classification methods. In general, mobile robots are usually equipped with LiDAR
for robot localization due to its accuracy, speed, and 3D reconstruction ability. Therefore, a
deep neural network can extract the features of the LiDAR point-cloud data. For example,
Chen et al. [20] extracted 2D LiDAR features and used an SVM to recognize and track pedestrians in front of the robot. However, in repetitive-pattern environments, e.g., a long corridor, the collected LiDAR point clouds are sparse. As a result, it is challenging to localize the AGV position precisely, leading to mislocalization or the kidnapped-robot problem.
When a mobile robot fails to localize itself due to sparse LiDAR point clouds, several methods have been developed to relocalize the AGV. The SLAM localization system relocalizes the AGV through the Monte Carlo Localization (MCL) method, which takes a long time and is not helpful in broad-space scenarios. Therefore, Wi-Fi fingerprinting was proposed to solve the robot-kidnapping problem [21], and MCL was integrated with the Fast Library for Approximate Nearest Neighbors (FLANN) machine learning technique to solve this problem [22].
1.3. Contributions
Motivated by the discussion above, this paper focuses on controlling the AGV's movement on curved paths and on localizing the AGV in structure-less environments (e.g., a long corridor). The main contributions of our work are as follows:
• For trajectory tracking, we adopt the PP algorithm and improve it. The traditional PP algorithm often causes errors when it encounters a turn because it decelerates too late. Therefore, an improved PP algorithm is proposed that incorporates turning-prediction-based deceleration to reduce the impact caused by late deceleration.
• To solve the kidnapped-robot problem, we combine 2D LiDAR point-cloud features
with a deep convolutional network-based classifier to distinguish the current situation
for selecting the SLAM or odometry localization system. Thus, if the AGV is in a situation where SLAM fails to determine the robot's position, the task can still be continued.
• In addition, practical experiments in the long corridor terrain are carried out to verify
the feasibility of the proposed system.
The remainder of this paper is organized as follows. In Section 2, the hardware platform and vehicle kinematics of the robot system are described. In Section 3, the improved Pure Pursuit algorithm using turning-prediction-based speed adjustment is introduced. In Section 4, the deep learning-based selection strategy using 2D LiDAR point-cloud features for the localization task is discussed. In Section 5, the practical experimental results and verification of the proposed method are reported. Finally, in Section 6, the conclusions are presented.
2. Robot System
The mobile robot has four wheels in a differential-drive configuration, driven by two motors on the left and right sides. In addition, the hardware platform is equipped with two LiDAR systems
that can obtain 360-degree point-cloud information in the front and back of the robot
for SLAM [23]. The schematic diagram of our mobile robot hardware platform is shown
in Figure 1.
$\dot{x} = v \cos \theta$ (1)

$\dot{y} = v \sin \theta$ (2)

$\omega = \dfrac{v_R - v_L}{L}$ (3)

$v = \dfrac{v_R + v_L}{2}$ (4)

$v_L = v - \dfrac{L \omega}{2}$ (5)

$L_f = k_f v + L_{fm}$ (6)
where k_f is the custom speed weight, v is the linear velocity of the mobile robot, and L_fm is the minimum forward-looking distance limit.
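As a minimal sketch of Equations (1)-(6), the differential-drive relations and the speed-dependent forward-looking distance can be written as follows; the parameter values (wheel separation L, speed weight k_f, minimum look-ahead L_fm) are illustrative assumptions, not the paper's calibrated settings:

```python
import math

# Sketch of Equations (1)-(6): differential-drive kinematics and the
# speed-dependent forward-looking (look-ahead) distance. All parameter
# values below are illustrative assumptions.

L = 0.5      # wheel separation L (m), assumed
K_F = 0.8    # custom speed weight k_f, assumed
L_FM = 0.3   # minimum forward-looking distance L_fm (m), assumed

def body_velocities(v_r, v_l):
    """Wheel speeds -> (linear v, angular w), Equations (3)-(4)."""
    return (v_r + v_l) / 2.0, (v_r - v_l) / L

def wheel_speeds(v, w):
    """Inverse mapping: Equation (5) and its right-wheel counterpart."""
    return v + L * w / 2.0, v - L * w / 2.0

def look_ahead_distance(v):
    """Speed-dependent look-ahead distance L_f, Equation (6)."""
    return K_F * v + L_FM

def pose_rates(v, theta):
    """World-frame velocity components, Equations (1)-(2)."""
    return v * math.cos(theta), v * math.sin(theta)
```

A faster robot thus looks further ahead, while the L_FM floor keeps the look-ahead point in front of the robot even at standstill.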
Figure 3. The definition of the target point g (blue circle) in Pure Pursuit algorithm.
The algorithm uses a PD controller to follow the path: it calculates the angle deviation according to the current position of the robot and the forward-looking point g, and then keeps the robot moving on the trajectory through the PD controller. The control structure block diagram of the PD controller is shown in Figure 4. In the PP algorithm, the forward-looking distance impacts path-tracking accuracy and may cause the mobile robot to oscillate, as shown in Figure 5.
Figure 5. The impact of the forward-looking (foresight) distance on the tracking error in the Pure Pursuit algorithm. A longer forward-looking distance produces smoother path tracking, while a shorter forward-looking distance gives more accurate tracking but makes the PD controller more challenging to tune.
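The target-point selection of Figure 3 and the PD regulation of the angle deviation can be sketched as follows; the gains, the waypoint-list path representation, and the function names are illustrative assumptions:

```python
import math

# A minimal Pure Pursuit step with a PD controller on the heading error,
# following the structure described above (Figures 3-4). The gains and the
# waypoint-based path representation are illustrative assumptions.

KP, KD = 1.5, 0.1  # PD gains, assumed

def find_target(path, pose, lookahead):
    """Return the first waypoint at least `lookahead` away from the robot."""
    x, y, _ = pose
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            return px, py
    return path[-1]  # fall back to the final waypoint

def pd_steering(pose, target, prev_err):
    """Angle deviation to the target point g, regulated by a PD law."""
    x, y, theta = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    # wrap the heading error into (-pi, pi]
    err = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
    omega = KP * err + KD * (err - prev_err)
    return omega, err
```

For a robot at the origin facing along a straight path, the heading error and commanded angular velocity are both zero, as expected.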
To improve the PP algorithm, this paper adopts a fixed short forward-looking distance to increase the trajectory-tracking ability at all times. The proposed method then judges whether the robot is approaching a turn and decelerates accordingly, using the angular velocity ω as a deceleration basis, so that the mobile robot can better track the path when the path is rugged, as shown in Figure 7.
Figure 7. The turning decision strategy of the proposed improved forward-looking distance tracking. (a) Predict Turn: predicting whether a turn lies ahead. (b) Turn Now: making the turn action. (c) Rugged Route: making a turn on a rugged route.
We separate the turning decision strategy into three steps. First, we predict the turning distance using an angle a between the two vectors created from the current robot center to the two next forward-looking points (shown in Figure 7a). Assume that the two vectors V1, V2 of angle a in the ideal situation (shown in Figure 8) satisfy the following definition:

$\vec{V}_2 = 2 \times \vec{V}_1$ (7)

where |V1| is the predicted distance and the ideal angle α is 60°. When the ideal situation is encountered, the AGV will decelerate. However, when the route has a radius of gyration R, the angle a will never reach the value α, and the AGV cannot decelerate. Without considering R, the maximum value of α can be obtained from V1 and V2 as below:

$\alpha \le \cos^{-1} \dfrac{|\vec{V}_1|}{|\vec{V}_2|}$ (8)
Figure 8. The curve prediction in the ideal case and the actual case.
From Equation (8), we can clearly define the predicted turning distance with the corresponding radius of gyration R:

$\cos^{-1} \dfrac{|\vec{V}_1|}{|\vec{V}_2|} > \tan^{-1} \dfrac{R}{|\vec{V}_1|}$ (9)

$\alpha \le \cos^{-1} \dfrac{|\vec{V}_1|}{|\vec{V}_2|} - \tan^{-1} \dfrac{R}{|\vec{V}_1|}$ (10)
In the following step (shown in Figure 7b), to make the turn action, we define a current turning angle b between the two vectors created from the current robot center to the next and the previous forward-looking points, and ensure b ≤ β, where β = 135° in the ideal case. When the vehicle is traveling on rugged terrain (shown in Figure 7c), this paper uses γ = 0.1 rad/s as the threshold: the angular velocity ω is judged too high if it exceeds γ, which helps the mobile robot increase the tracking accuracy.
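The three turning conditions above can be sketched as a single decision function. The thresholds α = 60°, β = 135°, and γ = 0.1 rad/s come from the text; the comparison directions and the helper names are our assumptions:

```python
import math

# Sketch of the three-step turning decision (Figure 7a-c). The thresholds
# alpha = 60 deg, beta = 135 deg, gamma = 0.1 rad/s come from the text;
# the comparison directions and helper names are our assumptions.

ALPHA = math.radians(60)   # ideal predicted-turn angle (Equation (8))
BETA = math.radians(135)   # current-turn angle threshold
GAMMA = 0.1                # angular-velocity threshold (rad/s)

def angle_between(v1, v2):
    """Unsigned angle between two 2D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def should_decelerate(v1, v2, b, omega):
    """True when any of the three turn conditions fires.

    v1, v2: vectors to the two next forward-looking points (Figure 7a)
    b: current turning angle between next/previous points (Figure 7b)
    omega: measured angular velocity (Figure 7c)
    """
    predict_turn = angle_between(v1, v2) >= ALPHA  # (a) upcoming turn
    turn_now = b <= BETA                           # (b) turning now
    rugged = abs(omega) > GAMMA                    # (c) rugged route
    return predict_turn or turn_now or rugged
```

On a straight segment the forward-looking points are collinear (a = 0, b = 180°) and no condition fires; any of an upcoming bend, an active turn, or a high ω triggers deceleration.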
detection for map building (Figure 9) and localization. However, in a structure-less environment such as a long corridor, SLAM cannot determine the robot's position on the map, which can easily lead to unexpected accidents. To avoid the mislocalization problem, we use the characteristics of the 2D LiDAR data to recognize where the mobile robot gets lost.
Figure 9. The environment map constructed by 2D LiDAR; black indicates obstacles (e.g., walls); white indicates no obstacles
and gray represents unknown areas.
In scan matching, SLAM will match the point clouds with the map features. If the
localization is successful, the point clouds are superimposed on the black edge of the SLAM
map. At this time, we can use all point clouds and the point clouds superimposed on the
black edge of the map to determine whether the AGV is mislocalized as follows:
$m_r = 1 - \dfrac{p_{match}}{p_{all}}$ (11)

where m_r represents the missing rate (ranging from 0 to 1), p_all represents the number of all point clouds in a frame, and p_match represents the number of point clouds in a frame superimposed on the map features. If m_r is greater than 50%, most of the point clouds are not superimposed on the map features; in this case, it can be judged that the mobile robot is lost. Otherwise, localization is successful.
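The mislocalization check of Equation (11) reduces to a few lines; the 0.5 threshold comes from the text, while the function names and the treatment of an empty scan are our assumptions:

```python
# Missing-rate check from Equation (11): the fraction of scan points that do
# not land on occupied map cells. The 0.5 threshold is from the text; the
# function names and the empty-scan case are illustrative assumptions.

def missing_rate(p_match, p_all):
    """m_r = 1 - p_match / p_all, Equation (11)."""
    if p_all == 0:
        return 1.0  # no points at all: treat the robot as fully lost
    return 1.0 - p_match / p_all

def is_mislocalized(p_match, p_all, threshold=0.5):
    """Judge the robot lost when m_r exceeds the threshold."""
    return missing_rate(p_match, p_all) > threshold
```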
However, this judgment method cannot detect the localization status under all conditions. When the surrounding environment has a repetitive pattern, the map features will be too consistent even though the point cloud is superimposed on the map features, and SLAM localization cannot be confirmed. For example, in the corridor part of the map (Figure 9), the point clouds extracted from the walls on both sides are too sparse, making it impossible to determine where the mobile robot is in the corridor.
where p_pic is the position of the point cloud in the picture, p_lidar is the position of the point cloud in the real world, R_pic is the transfer matrix from the LiDAR point-cloud position to the image point-cloud position, and t_pic is the offset of the LiDAR point-cloud position from the image point-cloud position. To convert the real scale to image pixels, we set one pixel equal to 0.05 m, where r is the image resolution. The point-cloud range is set within a 10 m × 10 m square centered on the mobile robot, as shown in Figure 10a. Finally, the point-cloud information is drawn on the two-dimensional picture with the map coordinate (100, 200) as the center of the mobile robot through a conversion matrix, as shown in Figure 10b.
2. When putting the 2D point-cloud image into the deep neural network for recognition, we found that people in the image introduce noise and degrade the recognition performance for the corridor. Therefore, image edge detection is used to empirically determine a Region of Interest (ROI) of x ≥ 100 and 90 ≤ y ≤ 110 within the image, and the noise in the ROI content is filtered out, as shown in Figure 11. After this image preprocessing, the image is put into a deep neural network to determine the corridor area.
3. For the corridor recognition network, we use two different architectures: InceptionV3 [18] and LeNet-5 [24]. Despite their impressive performance on classification tasks, most deep neural networks require powerful hardware for their heavy computation, which makes deploying deep learning methods on edge devices such as an AGV difficult. In this paper, we choose lightweight deep neural networks, which have a small number of parameters but still give good performance, to implement on our system. In the InceptionV3-based corridor classification model, we apply a fine-tuning approach with ImageNet pre-trained weights to speed up the training phase and improve model accuracy. Moreover, we also define a lightweight model inspired by LeNet. The proposed LeNet-inspired model, shown in Table 1, has fewer parameters than the InceptionV3-based model but keeps good classification performance.
4. When a long corridor area is detected by the trained deep neural networks, the AGV avoids the mislocalization problem by switching from the SLAM localization system to the IMU-based odometry localization system.
Figure 10. (a) Mobile robot receives point-cloud range. (b) Point-cloud is drawn on picture.
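Steps 1 and 4 above can be sketched together. The 0.05 m/pixel scale and the 10 m × 10 m window come from the text (which implies a 200 × 200 image), while the function and class names and the switching hysteresis are illustrative assumptions:

```python
# Sketch of step 1 (point-cloud rasterization) and step 4 (localization
# switching). The 0.05 m/pixel scale and 10 m x 10 m window are from the
# text; names and the hysteresis counter are illustrative assumptions.

RES = 0.05                  # meters per pixel (from the text)
HALF = 5.0                  # half of the 10 m window
SIZE = int(2 * HALF / RES)  # 200 pixels per side

def rasterize(points_xy):
    """Step 1: draw robot-frame LiDAR points (x, y) into a binary image."""
    img = [[0] * SIZE for _ in range(SIZE)]
    for x, y in points_xy:
        if abs(x) < HALF and abs(y) < HALF:  # keep points inside the square
            col = min(int(round((x + HALF) / RES)), SIZE - 1)
            row = min(int(round((y + HALF) / RES)), SIZE - 1)
            img[row][col] = 255
    return img

class LocalizationSwitcher:
    """Step 4: switch between SLAM and IMU-based odometry."""

    def __init__(self, patience=3):
        self.source = "slam"
        self.patience = patience  # consecutive detections before switching
        self._streak = 0

    def update(self, corridor_detected):
        """Feed one classifier decision per frame; return the active source."""
        if corridor_detected:
            self._streak += 1
            if self._streak >= self.patience:
                self.source = "odometry"  # SLAM unreliable in the corridor
        else:
            self._streak = 0
            self.source = "slam"          # map features available again
        return self.source
```

The patience counter is our addition to avoid flapping between the two localization sources on single-frame classifier noise.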
5. Experimental Results
This paper includes three main experiments to verify the performance of the improved
Pure Pursuit algorithm and the effectiveness of the LiDAR point-cloud feature-based deep
learning classifier for switching localization systems. The first part is a trajectory-tracking
accuracy experiment. The second part is a trajectory-tracking speed experiment. The third
part verifies the deep learning-based classifier to recognize long corridor terrain using the
LiDAR point-cloud feature for switching localization systems.
Figure 12. The path error comparison between MPC (purple), original PP (blue) and proposed improved PP (green) methods in the Double-L-shaped path (red): (a) Trajectory comparison chart. (b) MPC trajectory error graph. (c) Original PP trajectory error graph. (d) Improved PP trajectory error graph.
Figure 13. The path error comparison between MPC (purple), original PP (blue) and proposed improved PP (green) methods
in S-shaped path (red): (a) Trajectory comparison chart. (b) MPC trajectory error path. (c) Original PP trajectory error graph.
(d) Improved PP trajectory error graph.
Table 2. Results of Trajectory-Tracking Accuracy Experiment. MPC stands for Model Predictive
Control method, PP stands for Pure Pursuit method.
Figure 14. The speed comparison between MPC, original PP and proposed improved PP methods in Double-L-shaped path:
(a) MPC speed curve. (b) Original PP speed curve. (c) Improved PP speed curve.
Figure 15. The speed comparison between MPC, original PP and proposed improved PP methods in S-shaped path: (a) MPC
speed curve. (b) Original PP speed curve. (c) Improved PP speed curve.
Table 3. Results of Trajectory-Tracking Speed Experiment. MPC stands for Model Predictive Control
method, PP stands for Pure Pursuit method.
5.3. Verifying the Deep Learning-Based Localization Switching Method to Solve the Corridor Effect
The location of the experiment is a corridor at the National Taiwan University of
Science and Technology, as shown in Figure 16. The red line is the ground truth of the
experiment. Marks are spaced every 5 m, and the total length is 88 m.
The models achieve good accuracy results on the test dataset. For the model size comparison, the LeNet-based model has only about 1.2 million parameters, while its accuracy is similar to that of the bigger InceptionV3-based model. This guarantees that our proposed deep-learning-based model can be deployed on the AGV and give reliable and effective performance.
Figure 17. The tracked trajectory comparison. The red line is the ground truth, the blue line is the SLAM method's trajectory, and the green line is our method's trajectory.
Figure 18. The complete trajectory tracking of our proposed method in practice.
6. Conclusions
To improve the trajectory-tracking accuracy of the original Pure Pursuit algorithm
when following the turning path, we propose an improved Pure Pursuit algorithm that
adds the functions of predicting the next turn and adjusting speed in the current turn. For AGV localization in structure-less environments, this paper introduces a deep-learning-based corridor area classifier using 2D LiDAR data to select a suitable localization system and solve
the corridor effect. The practical experimental results verified that the maximum error of the modified Pure Pursuit is within 45 mm, a 77% improvement rate compared to the original Pure Pursuit. The improved Pure Pursuit algorithm also increased the speed by more than 5.6%. Moreover, the proposed deep-learning-based localization switching method achieves a completion rate 36.25% higher than using only SLAM localization, proving the robustness and effectiveness of the proposed method in practice.
Author Contributions: P.T.-T.N. was responsible for revising the paper, designing the deep-learning-based localization switching method, and verifying the experiments; S.-W.Y. was responsible for designing the SLAM and odometry localization and co-designing the deep-learning-based localization switching method; J.-F.L. was responsible for the path planning part and for improving the Pure Pursuit trajectory-tracking method; C.-H.K. is the advisor who oriented the research direction and gave comments and advice for this research. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Ministry of Science and Technology, Taiwan under Grant MOST 109-2221-E-011-112-MY3 and the "Center for Cyber-physical System Innovation" from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Chang, L.; Shan, L.; Li, J.; Dai, Y. The Path Planning of Mobile Robots Based on an Improved A* Algorithm. In Proceedings of
the 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), Banff, AB, Canada, 9–11 May 2019;
pp. 257–262.
2. Zeng, Z.; Sun, W.; Wu, W.; Xue, M.; Qian, L. An Efficient Path Planning Algorithm for Mobile Robots. In Proceedings of the 2019
IEEE 15th International Conference on Control and Automation (ICCA), Edinburgh, UK, 16–19 July 2019; pp. 487–493.
3. Li, Y.; Huang, Z.; Xie, Y. Path planning of mobile robot based on improved genetic algorithm. In Proceedings of the 2020 3rd
International Conference on Electron Device and Mechanical Engineering (ICEDME), Suzhou, China, 1–3 May 2020; pp. 691–695.
4. Zhang, L.; Zhang, Y.; Li, Y. Mobile Robot Path Planning Based on Improved Localized Particle Swarm Optimization. IEEE Sens. J.
2020, 21, 6962–6972. [CrossRef]
5. Wu, H.; Si, Z.; Li, Z. Trajectory Tracking Control for Four-Wheel Independent Drive Intelligent Vehicle Based on Model Predictive
Control. IEEE Access 2020, 8, 73071–73081. [CrossRef]
6. Li, S.; Li, Z.; Yu, Z.; Zhang, B.; Zhang, N. Dynamic trajectory planning and tracking for autonomous vehicle with obstacle
avoidance based on model predictive control. IEEE Access 2019, 7, 132074–132086. [CrossRef]
7. Yang, T.; Li, Z.; Yu, J. Trajectory Tracking Control of Surface Vehicles: A Prescribed Performance Fixed-Time Control Approach.
IEEE Access 2020, 8, 209441–209451. [CrossRef]
8. Amer, N.H.; Hudha, K.; Zamzuri, H.; Aparow, V.R.; Abidin, A.F.Z.; Kadir, Z.A.; Murrad, M. Adaptive Trajectory Tracking
Controller for an Armoured Vehicle: Hardware-in-the-loop Simulation. In Proceedings of the 2018 57th Annual Conference of the
Society of Instrument and Control Engineers of Japan (SICE), Nara, Japan, 11–14 September 2018; pp. 462–467.
9. Yan, F.; Li, B.; Shi, W.; Wang, D. Hybrid visual servo trajectory tracking of wheeled mobile robots. IEEE Access 2018, 6, 24291–24298.
[CrossRef]
10. Chen, Y.; Shan, Y.; Chen, L.; Huang, K.; Cao, D. Optimization of Pure Pursuit Controller based on PID Controller and Low-pass
Filter. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7
November 2018; pp. 3294–3299.
11. Wang, W.-J.; Hsu, T.-M.; Wu, T.-S. The improved pure pursuit algorithm for autonomous driving advanced system. In Proceedings
of the 2017 IEEE 10th International Workshop on Computational Intelligence and Applications (IWCIA), Hiroshima, Japan, 11–12
November 2017; pp. 33–38.
12. Wang, R.; Li, Y.; Fan, J.; Wang, T.; Chen, X. A Novel Pure Pursuit Algorithm for Autonomous Vehicles Based on Salp Swarm
Algorithm and Velocity Controller. IEEE Access 2020, 8, 166525–166540. [CrossRef]
13. Yousif, K.; Bab-Hadiashar, A.; Hoseinnezhad, R. An Overview to Visual Odometry and Visual SLAM: Applications to Mobile
Robotics. Intell. Ind. Syst. 2015, 1, 289–311. [CrossRef]
14. Rozenberszki, D.; Majdik, A.L. LOL: Lidar-only Odometry and Localization in 3D point cloud maps. In Proceedings of the 2020
IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4379–4385.
15. Zhao, S.; Zhang, H.; Wang, P.; Nogueira, L.; Scherer, S. Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for
Challenging Environments. arXiv 2021, arXiv:2104.14938.
16. Cho, H.; Kim, E.K.; Kim, S. Indoor SLAM application using geometric and ICP matching methods based on line features. Robot.
Auton. Syst. 2018, 100, 206–224. [CrossRef]
17. Millane, A.; Oleynikova, H.; Nieto, J.; Siegwart, R.; Cadena, C. Free-Space Features: Global Localization in 2D Laser SLAM Using
Distance Function Maps. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
Macau, China, 3–8 November 2019; pp. 1271–1277.
18. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June
2016; pp. 2818–2826.
19. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
20. Chen, J.; Ye, P.; Sun, Z. Pedestrian Detection and Tracking Based on 2D Lidar. In Proceedings of the 2019 6th International
Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 421–426.
21. Luo, R.C.; Hsiao, T.J. Kidnapping and Re-Localizing Solutions for Autonomous Service Robotics. In Proceedings of the
IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018;
pp. 2552–2557.
22. Luo, R.C.; Yeh, K.C.; Huang, K.H. Resume navigation and re-localization of an autonomous mobile robot after being kidnapped.
In Proceedings of the 2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Washington, DC, USA,
21–23 October 2013; pp. 7–12.
23. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International
Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278.
24. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86,
2278–2324. [CrossRef]
25. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [CrossRef]