[1] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//Proceedings of the 2007 IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara: IEEE, 2007: 1-10.
[2] NEWCOMBE R A, LOVEGROVE S J, DAVISON A J. DTAM: dense tracking and mapping in real-time[C]//Proceedings of the 2011 International Conference on Computer Vision. Barcelona: IEEE, 2011: 2320-2327.
[3] BLOESCH M, CZARNOWSKI J, CLARK R, et al. CodeSLAM: learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2018: 2560-2568.
[4] KOESTLER L, YANG N, ZELLER N, et al. TANDEM: tracking and dense mapping in real-time using deep multi-view stereo[C]//Proceedings of the 5th Conference on Robot Learning (CoRL). [S.l.]: PMLR, 2022: 34-45.
[5] TEED Z, DENG J. DROID-SLAM: deep visual SLAM for monocular, stereo, and RGB-D cameras[J]. Advances in Neural Information Processing Systems, 2021, 34: 16558-16569.
[6] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.
[7] WANG Wenshan, ZHU Delong, WANG Xiangwei, et al. TartanAir: a dataset to push the limits of visual SLAM[C]//Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV: IEEE, 2020: 4909-4916.
[8] SUCAR E, LIU Shikun, ORTIZ J, et al. iMAP: implicit mapping and positioning in real-time[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC: IEEE, 2021: 6209-6218.
[9] ZHU Zihan, PENG Songyou, LARSSON V, et al. NICE-SLAM: neural implicit scalable encoding for SLAM[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA: IEEE, 2022: 12776-12786.
[10] TEED Z, DENG Jia. RAFT: recurrent all-pairs field transforms for optical flow[M]//Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 402-419.
[11] ILA V, POLOK L, SOLONY M, et al. Fast incremental bundle adjustment with covariance recovery[C]//Proceedings of the 2017 International Conference on 3D Vision (3DV). Qingdao: IEEE, 2017: 175-184.
[12] RUSU R B, COUSINS S. 3D is here: point cloud library (PCL)[C]//Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Shanghai: IEEE, 2011: 1-4.
[13] CURLESS B, LEVOY M. A volumetric method for building complex models from range images[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. [S.l.]: ACM Press, 1996: 303-312.
[14] ZHOU Q Y, PARK J, KOLTUN V. Open3D: a modern library for 3D data processing[EB/OL]. 2018. arXiv: 1801.09847.
[15] ROSINOL A, ABATE M, CHANG Yun, et al. Kimera: an open-source library for real-time metric-semantic localization and mapping[C]//Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris: IEEE, 2020: 1689-1696.
[16] YAO Yao, LUO Zixin, LI Shiwei, et al. MVSNet: depth inference for unstructured multi-view stereo[C]//Proceedings of the European Conference on Computer Vision (ECCV). Cham: Springer, 2018: 785-801.