Robot Gaussian-Historical Relocalization: Inertial Measurement Unit-LiDAR Likelihood Field Matching
Corresponding Author:
Min Kang
School of Automation and Electrical Engineering, Zhejiang University of Science and Technology
Hangzhou, Zhejiang, China
Email: [email protected]
1. INTRODUCTION
A key prerequisite for mobile robots to accomplish autonomous navigation is positioning, which
entails determining the robot's pose in its operational environment. Tasks such as mapping, obstacle
avoidance, area coverage, and target tracking rely on accurate localization. However, one major challenge is
the “robot kidnapping” problem [1], where a robot is suddenly relocated to an unknown position without any
prior information, causing a loss of localization. To ensure the safe and effective operation of robots in real-
world applications such as industrial automation, service robotics, and autonomous driving, it is crucial to
address the relocalization problem after the robot loses its positioning [2].
Particle filtering and visual simultaneous localization and mapping (Visual SLAM) are two popular
methods in robot relocalization. Visual SLAM uses camera image data to locate robots and create maps, and
is strongly affected by changes in lighting [3]–[5]. Particle filtering uses a group of particles to estimate the
robot’s state, but it is computationally demanding, particularly in large-scale environments [6]–[10]. These
limitations, such as Visual SLAM’s sensitivity to illumination changes and particle filtering’s high
computational costs, underscore the need for more reliable and effective relocalization techniques.
At present, many scholars have researched relocalization and achieved fruitful results. For example,
Hu et al. [11] proposed a fusion positioning method combining Wi-Fi, LiDAR, and maps based on Kalman
filtering. This method addresses two issues: the poor performance of single Wi-Fi positioning in indoor
environments and incorrect LiDAR positioning due to ineffective feature extraction in specific scenarios.
However, it requires substantial computational resources. Wang et al. [12] proposed a global positioning
method based on the principles of laser odometry and improved adaptive Monte Carlo localization (AMCL). They used laser
odometry based on the point-to-line iterative closest point (PL-ICP) method to replace the traditional odometry
and introduced the idea of deoxyribonucleic acid (DNA) crossover and mutation in genetics into the particle
iteration process of AMCL. However, the iterative closest point (ICP) method obtains an approximate optimal pose through
iteration; its heavy computation often allows the robot's state to change before the solution is obtained,
resulting in a large cumulative error. Lan et al. [13] proposed narrow field of view (NFOV) error
recognition to determine the status of base stations through the sliding window technique and standard
deviation threshold, eliminating abnormal data. Their graph optimization fusion positioning combines ultra-wideband
(UWB) measurements and odometer information to obtain accurate robot trajectories through
optimization algorithms. However, UWB signals may be degraded by environmental interference, reducing the reliability
of the measurements. Wang et al. [1] suggested a residual network (ResNet)-based robot relocalization
technique that combined coarse and fine matching to greatly increase positioning success rate and efficiency.
Even though current approaches have some benefits, they still have drawbacks like a slow relocalization
response and high computational complexity. Conventional approaches show limited accuracy in dynamic
obstacle detection under challenging conditions such as lighting variations and object occlusion, which
prevents them from meeting operational requirements in complex environments. Moreover, they often fail to
effectively integrate historical pose data and inertial measurement unit (IMU) measurements to enable robust
relocalization in dynamic scenarios, resulting in reduced adaptability and slow recovery after disruptions.
In order to overcome these limitations, this study proposes a novel LiDAR-IMU fusion-based
relocalization technique. The method fuses IMU yaw angles with historical pose data to effectively compress
the search space, and it utilizes curvature and normal vector analysis for feature extraction, significantly
reducing data volume while preserving most critical environmental information. Furthermore, we propose a
frame-difference-based dynamic obstacle detection approach for accurate dynamic environment discrimination,
along with an adaptive likelihood field matching algorithm that dynamically optimizes computational
resource allocation according to environmental complexity for LiDAR-to-map matching. The experimental
results demonstrate that the proposed method substantially improves system responsiveness and environmental
adaptability, providing an effective solution for robot relocalization in complex dynamic scenarios.
2. RELOCALIZATION APPROACH
The relocalization implementation block diagram is shown in Figure 1. It mainly consists of a data
input layer, processing module, and output layer. The data input layer includes data sources such as historical
coordinates, grid maps, IMU, and LiDAR, responsible for collecting various raw data to provide basic
information for the system. The processing module is the core part of the relocalization implementation; it
performs analytical processing and data fusion on the input signals to determine the optimal robot pose
estimation. The output layer then produces optimal pose estimation based on the processing module’s results,
determining the robot’s position and orientation.
The covariance matrix $\boldsymbol{\Sigma}$, a $3 \times 3$ matrix for the pose data, describes the data distribution and the correlations between dimensions. Its elements are calculated as $\Sigma_{jk} = \frac{1}{n-1}\sum_{i=1}^{n}(p_{ij}-\mu_j)(p_{ik}-\mu_k)$ for $j,k \in \{1,2,3\}$ corresponding to $x$, $y$, and $\theta$. Expanded, the covariance matrix is given by (1),

$$\boldsymbol{\Sigma} = \begin{bmatrix} \sigma_{xx}^2 & \sigma_{xy} & \sigma_{x\theta} \\ \sigma_{yx} & \sigma_{yy}^2 & \sigma_{y\theta} \\ \sigma_{\theta x} & \sigma_{\theta y} & \sigma_{\theta\theta}^2 \end{bmatrix} \tag{1}$$

where $\sigma_{xx}^2$ and $\sigma_{yy}^2$ are the $x$ and $y$ variances, $\sigma_{\theta\theta}^2$ is the $\theta$ variance, and $\sigma_{xy}=\sigma_{yx}$, $\sigma_{x\theta}=\sigma_{\theta x}$, $\sigma_{y\theta}=\sigma_{\theta y}$ are the respective covariances. The covariance matrix is dynamically adjusted to ensure $\sigma_{xy}^2 = \max(\sigma_{xx}^2, \sigma_{yy}^2) \le 0.25\,\mathrm{m}^2$ for the translation variance and $\sigma_{\theta}^2 = \sigma_{\theta\theta}^2 \le 0.09\,\mathrm{rad}^2$ for the rotation variance, with the adjustments $\sigma_{xx}^2 = \min(\sigma_{xx}^2, 0.25)$, $\sigma_{yy}^2 = \min(\sigma_{yy}^2, 0.25)$, and $\sigma_{\theta\theta}^2 = \min(\sigma_{\theta\theta}^2, 0.09)$. In a Gaussian distribution, the $3\sigma$ range, covering about 99.7% of the data, defines the effective search region. For the two-dimensional translation ($x$ and $y$), it is an ellipse obtained from the eigenvalue decomposition of the covariance matrix, and for the one-dimensional rotation ($\theta$), it is the interval $[\mu_\theta - 3\sqrt{\sigma_{\theta\theta}^2},\; \mu_\theta + 3\sqrt{\sigma_{\theta\theta}^2}]$. When a new pose $p_{new} = [x_{new}, y_{new}, \theta_{new}]^T$ is observed, the Mahalanobis distance $d_M$ is calculated to measure its deviation from the historical Gaussian model, as given by (2),

$$d_M = \sqrt{(p_{new}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (p_{new}-\boldsymbol{\mu})} \tag{2}$$

where $\boldsymbol{\mu} = [\mu_x, \mu_y, \mu_\theta]^T$ is the mean of the historical poses. The system initiates the relocalization process if the Mahalanobis distance $d_M$ surpasses a predetermined threshold T, signifying a notable departure from the historical distribution.
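As a concrete illustration of this historical-pose model, the short Python sketch below fits the Gaussian, applies the variance caps (0.25 m², 0.09 rad²), and triggers relocalization on the Mahalanobis test; the function names, the example threshold of 3.0, and the sample poses are illustrative assumptions rather than the system's actual implementation.

```python
import numpy as np

def build_pose_model(history, var_xy_max=0.25, var_theta_max=0.09):
    """Fit a Gaussian to historical poses [x, y, theta] and cap the variances."""
    poses = np.asarray(history)                 # shape (n, 3)
    mu = poses.mean(axis=0)
    sigma = np.cov(poses, rowvar=False)         # unbiased 3x3 covariance (divides by n-1)
    # Clamp the diagonal variances as described in the text.
    sigma[0, 0] = min(sigma[0, 0], var_xy_max)
    sigma[1, 1] = min(sigma[1, 1], var_xy_max)
    sigma[2, 2] = min(sigma[2, 2], var_theta_max)
    return mu, sigma

def needs_relocalization(p_new, mu, sigma, threshold=3.0):
    """Trigger relocalization when the Mahalanobis distance exceeds the threshold."""
    diff = np.asarray(p_new) - mu
    d_m = np.sqrt(diff @ np.linalg.inv(sigma) @ diff)
    return d_m > threshold

# Example: recent poses cluster near (1.0, 2.0, 0.1); a far-away pose triggers relocalization.
history = [[1.0, 2.0, 0.10], [1.1, 2.0, 0.12], [0.9, 1.9, 0.08], [1.0, 2.1, 0.11]]
mu, sigma = build_pose_model(history)
print(needs_relocalization([4.0, 6.0, 1.5], mu, sigma))   # True for a "kidnapped" pose
```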
The resulting pre-integrated quantities include the relative rotation increment 𝛥𝑅ij , the relative
velocity increment 𝛥𝑣ij , and the relative position increment 𝛥𝑝ij . After removing the effects of gravity and
sensor bias, the IMU pre-integration can be approximated by (3) to (5).

$$\Delta R_{ij} \approx \prod_{k=i}^{j-1} \exp\big((\omega_k - b_g)\,\Delta t\big) \tag{3}$$

$$\Delta v_{ij} \approx \sum_{k=i}^{j-1} R_k (a_k - b_a)\,\Delta t \tag{4}$$

$$\Delta p_{ij} \approx \sum_{k=i}^{j-1} \left[ v_k\,\Delta t + \tfrac{1}{2} R_k (a_k - b_a)\,\Delta t^2 \right] \tag{5}$$
Here, 𝜔𝑘 and 𝑎𝑘 denote the gyroscope and accelerometer measurements at time 𝑘, respectively, 𝑏𝑔 and 𝑏𝑎
represent the gyroscope and accelerometer biases, and 𝑅𝑘 is the rotation matrix at time 𝑘.
In situations such as robot kidnapping, where the robot’s prior pose estimate becomes invalid, IMU
pre-integration can provide a relatively stable motion prior to assist LiDAR in pose initialization. This
enables rapid relocalization. Furthermore, the high-frequency nature of IMU measurements allows the system
to continuously capture orientation changes over short time intervals, thereby improving its responsiveness to
abrupt motion.
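A rough numpy sketch of the pre-integration recursion in (3) to (5) is shown below; it follows the standard on-manifold formulation of Forster et al. [14], assumes the accelerometer readings are already gravity-compensated as stated above, and uses helper names of our own choosing rather than the paper's implementation.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3)
    axis = phi / angle
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def preintegrate(omegas, accels, dt, b_g, b_a):
    """Accumulate the relative rotation, velocity, and position increments of (3)-(5).

    omegas, accels: lists of gyroscope and (gravity-compensated) accelerometer samples.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for omega_k, a_k in zip(omegas, accels):
        a_corr = a_k - b_a
        dp += dv * dt + 0.5 * (dR @ a_corr) * dt ** 2   # position increment, eq. (5)
        dv += (dR @ a_corr) * dt                         # velocity increment, eq. (4)
        dR = dR @ so3_exp((omega_k - b_g) * dt)          # rotation increment, eq. (3)
    return dR, dv, dp
```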
The neighborhood point set for the i-th scan point is defined as 𝒩𝑖 . For each neighboring point 𝑗 ∈
𝒩𝑖 , we employ an indicator function 𝕀(𝛥𝑟𝑗𝑚 > 𝑇) to evaluate whether the inter-frame difference 𝛥𝑟𝑗𝑚
exceeds threshold T. The function returns 1 when 𝛥𝑟𝑗𝑚 > 𝑇, and 0 otherwise.
If ∑𝑗∈𝒩𝑖 𝕀 (𝛥𝑟𝑗𝑚 > 𝑇) ≥ 1, the i-th scan point is preliminarily identified as potentially dynamic.
However, a special case requires consideration before final determination.
In real-world scenarios, when a LiDAR beam switches from a dynamic obstacle to a static
background due to object movement, the point may falsely trigger dynamic detection. To resolve this, we
analyze both distance change patterns (sudden increase followed by stabilization) and scan angles to
distinguish true dynamic points from revealed static features. This approach prevents misclassification when
∑𝑗∈𝒩𝑖 𝕀 (𝛥𝑟𝑗𝑚 > 𝑇) ≥ 1 occurs due to obstacle displacement rather than actual dynamics, substantially
reducing false positives in the detection system.
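The frame-difference check and the revealed-background special case can be sketched as follows; the threshold value, the neighborhood radius, and the reduction of the special case to a simple increase-then-stabilize test (omitting the scan-angle analysis) are simplifying assumptions for illustration.

```python
import numpy as np

def detect_dynamic_points(r_prev, r_curr, r_next, T=0.3, neighborhood=2):
    """Flag scan points whose neighborhood shows a large inter-frame range change.

    r_prev, r_curr, r_next: range arrays of consecutive LiDAR frames (same beam count).
    T: range-difference threshold in meters (illustrative value).
    """
    n = len(r_curr)
    delta = np.abs(np.asarray(r_curr) - np.asarray(r_prev))   # inter-frame difference per beam
    dynamic = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - neighborhood), min(n, i + neighborhood + 1)
        # Indicator sum over the neighborhood: at least one large change -> candidate.
        if np.sum(delta[lo:hi] > T) >= 1:
            # Revealed-background case: range jumps up and then stabilizes in the next frame.
            jumped_up = r_curr[i] - r_prev[i] > T
            stabilized = abs(r_next[i] - r_curr[i]) < T
            if not (jumped_up and stabilized):
                dynamic[i] = True
    return dynamic
```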
For static feature extraction, the covariance matrix of each scan point's neighborhood is computed as shown in (7),

$$C = \frac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^T \tag{7}$$

where $p_i$ is a point in the neighborhood and $\bar{p}$ is the centroid of the neighborhood. Eigenvalue decomposition is then performed on the covariance matrix $C$ to obtain the three eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$. The curvature is computed as the ratio of the minimum eigenvalue to the sum of the eigenvalues, as shown in (8). Points with larger curvature usually correspond to corner points or edges in the environment.

$$k = \frac{\min(\lambda_1, \lambda_2, \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3} \tag{8}$$
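A minimal sketch of this curvature computation in (7) and (8) is given below; the brute-force nearest-neighbor search and the neighborhood size k = 5 are illustrative choices, and the same code works for 2D or 3D scan points alike.

```python
import numpy as np

def neighborhood_curvature(points, center_idx, k=5):
    """Curvature of a scan point from the eigenvalues of its neighborhood covariance."""
    points = np.asarray(points)
    center = points[center_idx]
    # Take the k nearest neighbors of the query point (brute force, for illustration).
    dists = np.linalg.norm(points - center, axis=1)
    neighbors = points[np.argsort(dists)[:k]]
    centroid = neighbors.mean(axis=0)
    diffs = neighbors - centroid
    C = diffs.T @ diffs / k                       # neighborhood covariance, eq. (7)
    eigvals = np.linalg.eigvalsh(C)               # eigenvalues in ascending order
    return eigvals[0] / (eigvals.sum() + 1e-12)   # min eigenvalue / sum, eq. (8)
```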
The parameters of a circle are fitted by the least squares method [18]. Suppose the equation of the circle is $(x-a)^2 + (y-b)^2 = r^2$, where $(a, b)$ are the coordinates of the circle center and $r$ is the radius. The points $(x_{z_t^i}, y_{z_t^i})$ (the global map coordinates of the laser-scanned points projected onto the map) in the given point cloud are substituted into the circle equation to construct a system of equations in $a$, $b$, and $r$. The sum of the squared distances from the points to the circle is then minimized, that is, the values of $a$, $b$, and $r$ are found that make (9) reach its minimum.

$$\min_{a,b,r}\; \sum_{i=1}^{n} \left[\sqrt{(x_{z_t^i} - a)^2 + (y_{z_t^i} - b)^2} - r\right]^2 \tag{9}$$
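The geometric circle fit of (9) can be sketched with scipy.optimize.least_squares as follows; the centroid-based initialization is our own choice and is not necessarily the solver used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(points):
    """Least-squares circle fit minimizing the geometric residuals of eq. (9)."""
    points = np.asarray(points)
    x, y = points[:, 0], points[:, 1]

    def residuals(params):
        a, b, r = params
        return np.sqrt((x - a) ** 2 + (y - b) ** 2) - r   # signed distance to the circle

    # Initialize at the centroid with the mean point-to-centroid distance as radius.
    a0, b0 = x.mean(), y.mean()
    r0 = np.sqrt((x - a0) ** 2 + (y - b0) ** 2).mean()
    result = least_squares(residuals, x0=[a0, b0, r0])
    return result.x                                        # fitted (a, b, r)

# Example: noisy samples from a circle of radius 2 centered at (1, -1).
theta = np.linspace(0, 2 * np.pi, 50)
pts = np.c_[1 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)] + 0.01 * np.random.randn(50, 2)
print(fit_circle(pts))   # approximately [1, -1, 2]
```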
To mitigate the excessive influence of long straight lines during data processing, their weight is reduced. This strategy ensures that environmental feature information is retained efficiently. By using the processed data for probability inference, the positioning algorithm can identify potential positions more effectively, which improves the system's overall accuracy and robustness.
Here, $z_t^i$ is the $i$-th LiDAR measurement at time $t$, $x_t$ is the pose of the robot at time $t$, and $m$ is the map. The likelihood field model blurs the obstacles in the workspace, making it applicable to various spatial situations with smoother and more efficient results. Its core idea is to regard the points on the grid as forming a magnetic field that attracts the surrounding point clouds, with the attraction decaying with the square of the distance (or with a Gaussian decay) [24], [25]. The endpoints obtained by the LiDAR measurement are projected into the global coordinate system. At time $t$, the pose of the robot is $x_t = (x, y, \theta)^T$, the mounting position of the LiDAR relative to the robot center is $(x_{k,sens}, y_{k,sens})^T$, the angle of the LiDAR beam relative to the robot orientation is $\theta_{k,sens}$, the coordinate of the laser-measured endpoint relative to the LiDAR center is $z_t^k$, and the coordinates of the laser-scanned point projected onto the global coordinate system of the map are $(x_{z_t^k}, y_{z_t^k})$; as shown in Figure 2, their relationships are described by (8). This technique offers a more dependable method for robot relocalization while successfully addressing the drawbacks of the beam model. The robot uses this relationship to update its position estimate in the global coordinate system based on its current pose and sensor measurements, achieving precise positioning.
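For reference, the conventional likelihood-field projection consistent with the variables defined above is given below; it is stated here as the commonly used form rather than as a verbatim copy of the paper's equation.

$$\begin{pmatrix} x_{z_t^k} \\ y_{z_t^k} \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_{k,sens} \\ y_{k,sens} \end{pmatrix} + z_t^k \begin{pmatrix} \cos(\theta + \theta_{k,sens}) \\ \sin(\theta + \theta_{k,sens}) \end{pmatrix}$$

The likelihood of each projected endpoint is then read from the precomputed distance-to-nearest-obstacle field of the grid map, which is what makes the matching smooth with respect to small pose perturbations.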
During relocalization initialization, the system constructs a 3σ-constrained search region on the 2D plane by leveraging real-time IMU yaw angle measurements ($\theta$). This approach reduces the LiDAR heading search range to ±30° (an 83.3% reduction compared to a full 360° search) while initializing the positional search space using the historical trajectory variance constraints ($\sigma_{xx}^2, \sigma_{yy}^2 \le 0.25\,\mathrm{m}^2$).
For each candidate pose [𝑥, 𝑦, 𝜃]𝑇 , the system processes LiDAR data through dynamic obstacle
removal and static feature extraction, then calculates the likelihood of matching between environmental features
(edges/planes) and pre-built map elements. The iterative optimization maximizes the likelihood function $p(z_t \mid x_t, m)$, where $z_t$ denotes current observations, $x_t$ represents candidate poses, and $m$ is the static map.
If the maximum matching score within the initial search region falls below the predefined threshold,
the system activates an adaptive spiral expansion strategy as shown in Figure 3. Taking the initial search
center as the origin, it dynamically adjusts the search step size using 𝐿𝑘 = 𝐿0 ⋅ 𝑟 𝑘 (where 𝐿0 = 0.5 𝑚 is the
initial step size and 𝑟 = 0.8 is the decay rate), extending the search area in a spiral pattern outward. The
feature matching and optimization procedure is repeated following each expansion until a pose that satisfies
the requirements is discovered. The final candidate pose is validated twice: first, it is examined for validity
within the boundaries of the map and for the absence of collisions; second, IMU pre-integration is used to
confirm that the motion constraints are consistent. Pose validation completes the relocalization process by
publishing the pose to the output layer.
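The search-region initialization and the adaptive spiral expansion can be sketched as follows; only the ±30° heading window, the 3σ translation bounds, and the step-size schedule $L_k = L_0 \cdot r^k$ come from the text, while the grid resolution, the eight ring directions, the scoring stub, and the acceptance threshold are placeholders.

```python
import numpy as np

def spiral_relocalize(score_fn, x0, y0, yaw_imu,
                      sigma_x=0.5, sigma_y=0.5,
                      L0=0.5, r=0.8, accept=0.7, max_rings=6):
    """Evaluate candidate poses inside the 3-sigma box around (x0, y0) and within
    +/-30 degrees of the IMU yaw; if no candidate scores above `accept`, expand
    outward in a spiral whose step size follows L_k = L0 * r**k.

    score_fn(x, y, theta) -> matching score (stand-in for likelihood field matching).
    """
    yaws = yaw_imu + np.deg2rad(np.arange(-30, 31, 5))   # +/-30 deg heading window
    best_pose, best_score = None, -np.inf

    def evaluate(cx, cy, half_x, half_y):
        nonlocal best_pose, best_score
        for x in np.arange(cx - half_x, cx + half_x + 1e-9, 0.25):
            for y in np.arange(cy - half_y, cy + half_y + 1e-9, 0.25):
                for th in yaws:
                    s = score_fn(x, y, th)
                    if s > best_score:
                        best_pose, best_score = (x, y, th), s

    # Initial search region from the historical-pose 3-sigma bounds.
    evaluate(x0, y0, 3 * sigma_x, 3 * sigma_y)
    radius = 0.0
    for k in range(max_rings):
        if best_score >= accept:
            break                                        # acceptable match found
        radius += L0 * r ** k                            # adaptive step size L_k
        for ang in np.deg2rad(np.arange(0, 360, 45)):    # eight directions per ring
            evaluate(x0 + radius * np.cos(ang), y0 + radius * np.sin(ang), 0.5, 0.5)
    return best_pose, best_score

# Example with a toy score peaked at the true pose (2.0, 1.0, 0.3):
# pose, score = spiral_relocalize(
#     lambda x, y, t: np.exp(-((x - 2) ** 2 + (y - 1) ** 2 + (t - 0.3) ** 2)),
#     x0=1.8, y0=0.9, yaw_imu=0.25)
```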
Figure 4. LiDAR data comparison: (a) LiDAR data after dynamic obstacle detection and feature extraction and (b) raw LiDAR data
$$\theta_{err} = |\hat{\theta} - \theta| \times \frac{180}{\pi} \tag{13}$$
Figure 5. OpenLORIS-Scene dataset environments: (a) Cafe1-1 series, (b) Cafe1-2 series, (c) Corridor1-1 series, and (d) Corridor1-2 series
Figure 6 shows box plots of relocalization experiment results comparing different algorithms across
multiple evaluation metrics. A box plot is a statistical graph that describes the dispersion of a group of
data, so it reflects the stability of relocalization; the horizontal line inside each box marks the average value.
As shown in Figure 6(a), the box plots of our algorithm sit at a relatively lower overall position
(average error: 0.146 m) with smaller data dispersion. According to the comparative analysis, our approach
outperforms the LiDAR-dependent AMCL algorithm (0.321 m average error) by 54.7% in terms of overall
positional error while achieving positioning accuracy on par with ORB-SLAM3 (0.150 m average error).
However, as Figure 6(b) illustrates, our algorithm's angular error noticeably rises
when compared to ORB-SLAM3, especially in corridor datasets. The main causes of this performance
degradation are (1) the LiDAR system’s 180° scanning range in the OpenLORIS-Scene dataset and (2) the
glass surfaces that are common in corridor environments, which significantly reduce the number of
detectable LiDAR feature points, consequently increasing the dispersion of angular measurements.
Further analysis incorporating Figure 6(c) and Figure 6(d) reveals that our algorithm achieves an
average CPU utilization of just 0.40 units, representing an 84.7% reduction compared to ORB-SLAM3 (2.60
units) and a 58.9% reduction relative to AMCL (0.97 units). In terms of processing time, our algorithm
completes relocalization in 1.30 seconds on average, only 15% slower than ORB-SLAM3's 1.13 seconds
while being 74.4% faster than AMCL's 5.08-second runtime. The lower position and compact dispersion of
its box plots clearly demonstrate that the proposed method consumes fewer computational resources while
maintaining stable performance during relocalization processes. These characteristics confirm the algorithm’s
higher computational efficiency and faster relocalization capability.
Figure 6. Relocalization experiment data on the datasets: comparison of different algorithms in terms of (a) position error, (b) angle error, (c) CPU load, and (d) time cost
Figure 7. Relocalization test environment: (a) test environment, (b) test environment map, and (c) the mobile robot
This study conducted a series of “robot kidnapping” tests to evaluate the performance of three
algorithms: the proposed method, ORB-SLAM3, and AMCL. To enhance the comparative analysis under
dynamic conditions, the experimental environment incorporated moving obstacles that accounted for
approximately 15% of both LiDAR scans and camera frames. Additional challenging scenarios were
introduced at specific locations: glass interference at Point C and reduced illumination at Point F. The
experimental evaluation involved conducting comprehensive robot kidnapping tests across multiple scenarios
(A→B, B→C, C→D, D→E, and E→F), with systematic performance comparisons measured through four
key metrics: positioning accuracy (mean error and success rate documented in Table 1) and computational
efficiency (processing time and CPU utilization presented in Table 2).
The proposed algorithm demonstrates superior relocalization accuracy due to IMU-based heading
constraints, achieving an 89.6% reduction in mean yaw error compared to AMCL. Furthermore, the overall
success rate is improved by 47% relative to AMCL. The conventional AMCL approach is prone to premature
particle convergence, where particles erroneously concentrate in locally optimal—but globally incorrect—
regions, leading to suboptimal pose estimation.
In Scenario C with glass interference, the proposed method maintains a positional error of merely
0.109 m with a 60% success rate, whereas AMCL exhibits significantly higher errors and a drastically
reduced success rate of 10%, rendering it nearly ineffective. These results underscore the critical role of the
Gaussian 3σ constraint modeling for historical poses in enhancing robustness.
While ORB-SLAM3 achieves an 80% success rate in Scenario C, its performance significantly
degrades under low-light conditions (Scenario F), exhibiting a 26.9% increase in positional error and a 31.3%
decline in success rate (from 80% to 55%). In contrast, the proposed algorithm maintains stable performance
in Scenario F, demonstrating its illumination-invariant reliability. This consistent performance effectively
highlights the inherent advantage of LiDAR’s insensitivity to lighting variations.
The computational efficiency evaluation demonstrates that the proposed method achieves significant
improvements across all metrics compared to baseline algorithms. Relative to AMCL (97.9% CPU
utilization), the proposed solution shows a 68.1% improvement in localization speed (1.68 s vs 5.27 s) while
reducing CPU usage by 60.8% (38.4% vs 97.9%). When compared to ORB-SLAM3 (266.3% CPU
utilization), it maintains a 28.5% speed advantage with an 85.6% reduction in computational load (38.4% vs
266.3%). Although processing demands increase in feature-dense scenarios (C and F) due to extensive
feature extraction and long-range relocalization requirements, the algorithm’s optimized LiDAR processing
pipeline and efficient spiral search strategy enable it to consistently outperform AMCL in both accuracy and
resource utilization. These results validate the system’s ability to maintain high computational efficiency
while delivering robust performance across diverse operating conditions.
The experimental results demonstrate the effectiveness of the proposed IMU-LiDAR likelihood
field matching framework, which achieves three key performance benchmarks: i) an 84% success rate in
robot kidnapping recovery scenarios, ii) an average relocalization time of 1.68 seconds, and iii) a CPU
utilization rate of only 38.4%. These quantitative metrics collectively verify that the proposed method
significantly improves both localization accuracy and system robustness in challenging real-world operating
conditions, while maintaining exceptional computational efficiency.
4. CONCLUSION
Resolving the initial pose estimation and “robot kidnapping” issues is fundamental to enabling
autonomous robot navigation, a critical capability for applications in industrial automation, service robotics,
and autonomous driving. This study proposes a Robot Gaussian-Historical Relocalization algorithm based on
IMU-LiDAR likelihood field matching, achieving significant improvements in localization accuracy,
robustness, and computational efficiency. By leveraging IMU heading constraints and historical pose priors,
it effectively reduces yaw error and increases relocalization success rates, particularly in challenging
scenarios with dynamic obstacles or poor lighting. Compared to AMCL and ORB-SLAM3, the method not
only offers faster and more reliable relocalization but also operates with substantially lower CPU usage.
These results validate the algorithm’s suitability for real-world mobile robot applications, especially in
environments where traditional vision- or particle-based methods struggle. Nevertheless, the method faces
challenges in environments with weak geometric features, repetitive structures, or severe sensor interference
such as fog, dust, or reflections.
Future work will first address the limitations caused by sensor noise and failure. We plan to conduct
a detailed sensitivity analysis on the impact of errors in historical poses and sensor measurements, and
explore adaptive uncertainty modeling to enhance robustness under challenging conditions. In parallel, we
will investigate machine learning-based methods such as convolutional neural networks (CNNs) and Transformers to enhance LiDAR feature
extraction in ambiguous environments. Once a more robust single-robot system is established, we will extend
the method to multi-robot scenarios, enabling shared relocalization for better scalability and coordination.
ACKNOWLEDGMENTS
The authors would like to thank the technical staff at Professor Jia-Qiang Yang’s laboratory, located
in Room 1803 of the Hanggang Metallurgical Technology Building, for their assistance in setting up the
experimental environment and data collection.
FUNDING INFORMATION
Authors state there is no funding involved.
Name of Author C M So Va Fo I R D O E Vi Su P Fu
Ye-Ming Shen ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Min Kang ✓ ✓ ✓ ✓ ✓ ✓
Jia-Qiang Yang ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Zhong-Hou Cai ✓ ✓ ✓ ✓ ✓
INFORMED CONSENT
We have obtained informed consent from all individuals included in this study.
ETHICAL APPROVAL
The research related to human use complied with all relevant national regulations and
institutional policies in accordance with the tenets of the Helsinki Declaration and was approved by the
authors' institutional review board or equivalent committee.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author, Ye-
Ming Shen, upon reasonable request.
REFERENCES
[1] G. Wang, B. Shi, L. Wang, D. Song, and X. Jia, “Robot relocalization combining residual network and iterative closest point,”
Journal of Xi’an Polytechnic University, vol. 38, no. 4, pp. 18–25, 2024, doi: 10.13338/j.issn.1674-649x.2024.04.003.
[2] H. Ma, W. Yan, and L. Yang, “Research on depth vision based mobile robot autonomous navigation in underground coal mine,”
Journal of China Coal Society, vol. 45, no. 6, pp. 2193–2206, 2020, doi: 10.13225/j.cnki.jccs.ZN20.0214.
[3] J. D. Peña-Narvaez, F. Martín, J. M. Guerrero, and R. Pérez-Rodríguez, “A visual questioning answering approach to enhance
robot localization in indoor environments,” Frontiers in Neurorobotics, vol. 17, Nov. 2023, doi: 10.3389/fnbot.2023.1290584.
[4] Z. Zheng, K. Su, S. Lin, Z. Fu, and C. Yang, “Development of vision–based SLAM: from traditional methods to multimodal
fusion,” Robotic Intelligence and Automation, vol. 44, no. 4, pp. 529–548, Jul. 2024, doi: 10.1108/RIA-10-2023-0142.
[5] J. Bao, S. Yang, R. Zhu, H. Tang, and A. Song, “Indoor mobile robot relocalization method combining vision and laser,” China
Meas. Test, vol. 47, no. 11, pp. 1–7, 2021, doi: 10.11857/j.issn.1674-5124.2021060197.
[6] G. Ge, Y. Zhang, W. Wang, Q. Jiang, L. Hu, and Y. Wang, “Text-MCL: Autonomous mobile robot localization in similar
environment using text-level semantic information,” Machines, vol. 10, no. 3, p. 169, Feb. 2022, doi: 10.3390/machines10030169.
[7] C.-L. Zhou, J.-D. Chen, and F. Huang, “WiFi-PDR fusion indoor positioning technology based on unscented particle filter,”
Computer Science, vol. 49, no. 6A, pp. 606–611, 2022, doi: 10.11896/jsjkx.210700108.
[8] L. Jiang, C. Xiang, J.-Y. Zhu, and Q. Liu, “Particle filter relocation with semantic likelihood estimation,” Acta Electronica Sinica,
vol. 49, no. 2, pp. 306–314, 2021, doi: 10.12263/DZXB.20200396.
[9] Q. Ye, M. Zheng, and X. Qiu, “Implementation of an indoor autonomous navigation mobile robot system based on ROS,”
Transducer Microsyst. Technol., vol. 41, no. 2, pp. 90–93, 2022, doi: 10.13873/J.1000-9787(2022)02-0090-04.
[10] J. Shou, Z. Zhang, Y. Su, and Z. Zhong, “Design and implementation of an indoor mobile robot localization and navigation
system based on ROS and LiDAR,” Machine and Electronics, vol. 36, no. 11, pp. 76–80, 2018, doi: 10.3969/j.issn.1001-
2257.2018.11.018.
[11] Z. Hu, J. Liu, G. Huang, and Q. Tao, “Robot indoor localization fusing WiFi, LiDAR, and maps,” J. Electron. Inf. Technol., vol.
43, no. 8, pp. 2308–2316, 2021, doi: 10.11999/JEIT200671.
[12] Z. Wang, B. Yan, M. Dong, J. Wang, and P. Sun, “Wall-climbing robot localization method based on LiDAR and improved
AMCL,” Chin. J. Sci. Instrum., vol. 43, no. 12, pp. 220–227, 2022, doi: 10.19650/j.cnki.cjsi.J2210261.
[13] F. Lan, R. Liu, L. Guo, T. Deng, Z. Deng, and Y. Xiao, “Mobile robot localization fusing UWB azimuth and distance,” J.
Electron. Meas. Instrum., vol. 37, no. 8, pp. 155–163, 2023, doi: 10.13382/j.jemi.B2306261.
[14] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, “On-manifold preintegration for real-time visual-inertial odometry,”
IEEE Transactions on Robotics, vol. 33, no. 1, pp. 1–21, Feb. 2017, doi: 10.1109/TRO.2016.2597321.
[15] C. Chen and Y. Ding, “Laser radar image edge feature extraction method based on median filtering,” Laser J., vol. 44, no. 8, pp.
94–98, 2023, doi: 10.14016/j.cnki.jgzz.2023.08.094.
[16] Z. Gao, S. Wang, T. Miao, and A. Song, “Indoor laser SLAM system based on ground constraints and principal component
analysis feature extraction,” Manned Spacefl., vol. 30, no. 2, pp. 150–159, 2024, doi: 10.16329/j.cnki.zrht.2024.02.010.
[17] X. Xu, R. Yang, and N. Wang, “A robust circle detector with regionalized radius aid,” Pattern Recognition, vol. 149, p. 110256,
May 2024, doi: 10.1016/j.patcog.2024.110256.
[18] J. Xue, P. Wang, J. Zhou, and F. Cheng, “Accurate construction of orchard two-dimensional environmental map based on
improved Gmapping algorithm,” Transactions of the Chinese Society for Agricultural Machinery, vol. 54, no. 7, pp. 26–34, 2023,
doi: 10.6041/j.issn.1000-1298.2023.07.003.
[19] Y. Lyu, L. Hua, J. Wu, X. Liang, and C. Zhao, “Robust radar inertial odometry in dynamic 3D environments,” Drones, vol. 8, no.
5, p. 197, May 2024, doi: 10.3390/drones8050197.
[20] R. Zheng, G. Sun, and F. D. Li, “A fusion localization system for security robots based on millimeter wave radar and inertial
sensors,” Sensors, vol. 24, no. 23, p. 7551, Nov. 2024, doi: 10.3390/s24237551.
[21] X. Chi et al., “A laser data compensation algorithm based on indoor depth map enhancement,” Electronics, vol. 12, no. 12, p.
2716, Jun. 2023, doi: 10.3390/electronics12122716.
[22] N. Adurthi, “Scan matching-based particle filter for LIDAR-only localization,” Sensors, vol. 23, no. 8, p. 4010, Apr. 2023, doi:
10.3390/s23084010.
[23] H. Wang, Y. Yin, and Q. Jing, “Comparative analysis of 3D LiDAR scan-matching methods for state estimation of autonomous
surface vessel,” Journal of Marine Science and Engineering, vol. 11, no. 4, p. 840, Apr. 2023, doi: 10.3390/jmse11040840.
[24] K. Rajathi, N. Gomathi, M. Mahdal, and R. Guras, “Path segmentation from point cloud data for autonomous navigation,”
Applied Sciences, vol. 13, no. 6, p. 3977, Mar. 2023, doi: 10.3390/app13063977.
[25] Y. Li, Y. Wei, Y. Wang, Y. Lin, W. Shen, and W. Jiang, “False detections revising algorithm for millimeter wave radar SLAM in
tunnel,” Remote Sensing, vol. 15, no. 1, p. 277, Jan. 2023, doi: 10.3390/rs15010277.
BIOGRAPHIES OF AUTHORS