[1] MUR-ARTAL R,MONTIEL J M M,TARDÓS J D.ORB-SLAM:a versatile and accurate monocular SLAM system[J].IEEE Transactions on Robotics,2015,31(5):1147-1163.
[2] MUR-ARTAL R,TARDÓS J D.ORB-SLAM2:an open-source SLAM system for monocular,stereo,and RGB-D cameras[J].IEEE Transactions on Robotics,2017,33(5):1255-1262.
[3] CAMPOS C,ELVIRA R,RODRÍGUEZ J J G,et al.ORB-SLAM3:an accurate open-source library for visual,visual-inertial,and multimap SLAM[J].IEEE Transactions on Robotics,2021,37(6):1874-1890.
[4] CURLESS B,LEVOY M.A volumetric method for building complex models from range images[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques.[S.l.]:ACM Press,1996:303-312.
[5] WHELAN T,KAESS M,JOHANNSSON H,et al.Real-time large-scale dense RGB-D SLAM with volumetric fusion[J].International Journal of Robotics Research,2015,34(4/5):598-626.
[6] WHELAN T,LEUTENEGGER S,SALAS-MORENO R,et al.ElasticFusion:dense SLAM without a pose graph[C]//Proceedings of 2015 Robotics:Science and Systems XI.[S.l.]:Robotics:Science and Systems Foundation,2015.
[7] MILDENHALL B,SRINIVASAN P P,TANCIK M,et al.NeRF:representing scenes as neural radiance fields for view synthesis[C]//Proceedings of European Conference on Computer Vision.Cham:Springer,2020:405-421.
[8] SUCAR E,LIU Shikun,ORTIZ J,et al.iMAP:implicit mapping and positioning in real-time[C]//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision (ICCV).Montreal:IEEE,2022:6209-6218.
[9] WANG Hengyi,WANG Jingwen,AGAPITO L.Co-SLAM:joint coordinate and sparse parametric encodings for neural real-time SLAM[C]//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Vancouver:IEEE,2023:13293-13302.
[10] ZHU Zihan,PENG Songyou,LARSSON V,et al.NICE-SLAM:neural implicit scalable encoding for SLAM[C]//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).New Orleans:IEEE,2022:12776-12786.
[11] JOHARI M M,CARTA C,FLEURET F.ESLAM:efficient dense SLAM system based on hybrid representation of signed distance fields[C]//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Vancouver:IEEE,2023:17408-17419.
[12] YANG Xingrui,LI Hai,ZHAI Hongjia,et al.Vox-Fusion:dense tracking and mapping with voxel-based neural implicit representation[C]//Proceedings of 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).Singapore:IEEE,2022:499-507.
[13] SANDSTRÖM E,LI Yue,VAN GOOL L,et al.Point-SLAM:dense neural point cloud-based SLAM[C]//Proceedings of 2023 IEEE/CVF International Conference on Computer Vision (ICCV).Paris:IEEE,2024:18387-18398.
[14] KERBL B,KOPANAS G,LEIMKÜHLER T,et al.3D Gaussian splatting for real-time radiance field rendering[J].ACM Transactions on Graphics,2023,42(4):1-14.
[15] YESHWANTH C,LIU Y C,NIEßNER M,et al.ScanNet++:a high-fidelity dataset of 3D indoor scenes[C]//Proceedings of 2023 IEEE/CVF International Conference on Computer Vision (ICCV).Paris:IEEE,2024:12-22.
[16] STRAUB J,WHELAN T,MA Lingni,et al.The Replica dataset:a digital replica of indoor spaces[EB/OL].[2024-11-02].https://arxiv.org/abs/1906.05797.
[17] STURM J,ENGELHARD N,ENDRES F,et al.A benchmark for the evaluation of RGB-D SLAM systems[C]//Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.Vilamoura-Algarve:IEEE,2012:573-580.