[1]DURRANT-WHYTE H, BAILEY T.Simultaneous localization and mapping: part I[J].IEEE Robotics & Automation Magazine, 2006, 13(2):99-110. [2]KHAN M U, ALI ZAIDI S A, ISHTIAQ A, et al.A comparative survey of LiDAR-SLAM and LiDAR based sensor technologies[C]//Proceedings of 2021 Mohammad Ali Jinnah University International Conference on Computing (MAJICC).Karachi:IEEE, 2021:1-8. [3]HUANG, L.Review on LiDAR-based SLAM techniques[C]//Proceedings of 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML).[S.l]:IEEE, 2021:163-168. [4]王晨捷, 罗斌, 李成源, 等.无人机视觉SLAM协同建图与导航[J].测绘学报, 2020, 49(6):767-776. [5]TAKETOMI T, UCHIYAMA H, IKEDA S.Visual SLAM algorithms:a survey from 2010 to 2016[J].IPSJ Transactions on Computer Vision and Applications, 2017, 9(1):16. [6]ZHANG J, SINGH S.LOAM:LiDAR odometry and mapping in real-time[J].Robotics:Science and Systems Foundation, 2014, 2(9):1-9. [7]SHAN T X, ENGLOT B.LeGO-LOAM:lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).Madrid:IEEE, 2018:4758-4765. [8]SHAN T X, ENGLOT B, MEYERS D, et al.LIO-SAM:tightly-coupled LiDAR inertial odometry via smoothing and mapping[C]//Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).Las Vegas, NV:IEEE, 2020:5135-5142. [9]XU W, ZHANG F.FAST-LIO:a fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[EB/OL].[2024-12-20].https://arxiv.org/abs/2010.08196v3. [10] XU W, CAI Y X, HE D J, et al.FAST-LIO2:fast direct LiDAR-inertial odometry[J].IEEE Transactions on Robotics, 2022, 38(4):2053-2073. [11] QIN T, LI P L, SHEN S J.VINS-mono:a robust and versatile monocular visual-inertial state estimator[J].IEEE Transactions on Robotics, 2018, 34(4):1004-1020. [12] MUR-ARTAL R, TARDOS J D.ORB-SLAM2:an open-source SLAM system for monocular, stereo, and RGB-D cameras[J].IEEE Transactions on Robotics, 2017, 33(5):1255-1262. [13] XU K, HAO Y F, YUAN S H, et al.AirSLAM:an efficient and illumination-robust point-line visual SLAM system[EB/OL].[2024-12-20].https://arxiv.org/abs/2408.03520v4. [14] LI R H, WANG S, LONG Z Q, et al.UnDeepVO:monocular visual odometry through unsupervised deep learning[EB/OL].[2024-12-20].https://arxiv.org/abs/1709.06841v2. [15] ZHENG C, ZHU Q, XU W, et al.Fast-livo:Fast and tightly-coupled sparse-direct lidar-inertial-visual odometry[C]//Proceedings of 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).[S.l]:IEEE, 2022:4003-4009. [16] BAILEY T, NIETO J, GUIVANT J, et al.Consistency of the EKF-SLAM algorithm[C]//Proceedings of 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems.Beijing:IEEE, 2006:3562-3568. [17] 于宁波, 王石荣.利用双RGB-D传感器融合增强对未知环境的自主探索和地图构建[J].工程(英文), 2019, 5(1):355-373. [18] XUE J, WANG D, DU S, CUI D, et al.Visual-dominant multi-sensor fusion method for autonomous localization and obstacle perception in autonomous vehicles[J].Frontiers of Information Technology & Electronic Engineering, 2017, 1:122-139. [19] 文成林, 郭超, 高敬礼.多传感器多尺度图像信息融合算法[J].电子学报, 2008, 36(5):840-847. [20] 高强, 陆科帆, 吉月辉, 等.多传感器融合SLAM研究综述[J].现代雷达, 2024, 46(8):29-39. [21] RONNEBERGER O, FISCHER P, BROX T.U-Net:convolutional networks for biomedical image segmentation[M]//Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015.Cham:Springer International Publishing, 2015:234-241. [22] LIN Y C, WANG S, JIANG Y L, et al.Breaking of brightness consistency in optical flow with a lightweight CNN network[J].IEEE Robotics and Automation Letters, 2024, 9(8):6840-6847. [23] NGUYEN T M, YUAN S H, CAO M Q, et al.NTU VIRAL:a visual-inertial-ranging-LiDAR dataset, from an aerial vehicle viewpoint[J].The International Journal of Robotics Research, 2022, 41(3):270-280. [24] LIN J R, ZHANG F.R3 LIVE:a robust, real-time, RGB-colored, LiDAR-inertial-visual tightly-coupled state estimation and mapping package[C]//Proceedings of 2022 International Conference on Robotics and Automation (ICRA).Philadelphia, PA:IEEE, 2022:10672-10678. |