[1] 李群明, 熊蓉, 褚健. 室内自主移动机器人定位方法研究综述[J]. 机器人, 2003, 25(6): 560-567.
[2] 胡劲草. 室内自主式移动机器人定位方法研究[J]. 传动技术, 2006(4): 14-18, 46.
[3] 刘浩敏, 章国锋, 鲍虎军. 基于单目视觉的同时定位与地图构建方法综述[J]. 计算机辅助设计与图形学学报, 2016, 28(6): 855-868.
[4] 白云汉. 基于SLAM算法和深度神经网络的语义地图构建研究[J]. 计算机应用与软件, 2018, 35(1): 183-190.
[5] CADENA C, CARLONE L, CARRILLO H, et al. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[6] SHEN S, MICHAEL N, KUMAR V. Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). Seattle: IEEE, 2015: 5303-5310.
[7] YOUSIF K, BAB-HADIASHAR A, HOSEINNEZHAD R. An overview to visual odometry and visual SLAM: applications to mobile robotics[J]. Intelligent Industrial Systems, 2015, 1(4): 289-311.
[8] 王安娜. 室内移动机器人自定位方法的研究[D]. 天津: 河北工业大学, 2015.
[9] LEUTENEGGER S, LYNEN S, BOSSE M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 2015, 34(3): 314-334.
[10] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[11] QIN T, LI P, SHEN S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[12] FORSTER C, CARLONE L, DELLAERT F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2017, 33(1): 1-21.
[13] MUR-ARTAL R, TARDÓS J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
[14] MARTINELLI A. Closed-form solution of visual-inertial structure from motion[J]. International Journal of Computer Vision, 2014, 106(2): 138-152.
[15] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. The International Journal of Robotics Research, 2016, 35(10): 1157-1163.