Bulletin of Surveying and Mapping (测绘通报) ›› 2024, Vol. 0 ›› Issue (9): 87-95. doi: 10.13474/j.cnki.11-2246.2024.0916

• Academic Research •

  • Corresponding author: LI Guangyun. E-mail: guangyun_li_chxy@163.com
  • About the first author: LUO Haolong (1996—), male, PhD candidate; research interests: multi-sensor fusion navigation and SLAM. E-mail: haolong_luo@163.com
  • Funding: National Natural Science Foundation of China (42071454); State Key Laboratory of Geo-Information Engineering Fund (SKLGIE2023-M-2-2)

Visual/inertial/ultra-wideband dataset based on unmanned platform in complex scenes

LUO Haolong1,2, YANG Zidi1, LI Xueqiang1,3, ZOU Danping2, LI Jiansheng1, LI Guangyun1   

  1. School of Geospatial Information, Information Engineering University, Zhengzhou 450001, China;
    2. Shanghai Key Laboratory of Navigation and Location-based Services, Shanghai Jiao Tong University, Shanghai 200240, China;
    3. Troops 61618, Beijing 100080, China
  • Received: 2024-07-25; Published: 2024-10-09



Abstract: Navigation and SLAM based on multi-sensor fusion are currently the mainstream direction of development, and their research and application in complex scenes have attracted increasingly widespread attention. Nevertheless, multi-sensor datasets designed for complex scenes remain relatively scarce, particularly those that incorporate ultra-wideband (UWB) sensors. To help users test and verify multi-sensor fusion algorithms in complex scenes, and to probe the shortcomings and potential development directions of multi-sensor fusion and SLAM technology, this paper first uses unmanned ground vehicle and drone platforms to collect visual, inertial, and UWB data in seven complex scenes, including dynamic, non-line-of-sight, and large-scale scenes. A high-precision optical motion capture system then provides real-time six-degree-of-freedom ground-truth positions and orientations for the dataset. Finally, four state-of-the-art open-source algorithms, namely VINS-Mono, VINS-Fusion, VIR-SLAM, and ORB-SLAM3, are used to conduct experimental verification and analysis on all scene sequences. The experimental results demonstrate that the data from all scene sequences are valid and usable.
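The verification step described above is commonly performed by comparing each algorithm's estimated trajectory against the motion-capture ground truth using the absolute trajectory error (ATE) after rigid alignment. The following is a minimal sketch of that evaluation, not the authors' actual pipeline; the function names are illustrative, and it assumes time-synchronized, matched (N, 3) position arrays:

```python
import numpy as np

def umeyama_alignment(est, gt):
    """Least-squares rigid alignment (rotation + translation, unit scale)
    of estimated positions `est` onto ground-truth positions `gt`.
    Both are (N, 3) arrays of matched, time-synchronized samples."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # SVD of the cross-covariance yields the optimal rotation (Kabsch/Umeyama).
    U, _, Vt = np.linalg.svd(G.T @ E)
    # Sign correction guarantees a proper rotation (det(R) = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE, in the units of the input) after alignment."""
    R, t = umeyama_alignment(est, gt)
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

In practice a tool such as evo automates this, including timestamp association; the sketch only shows the core alignment-then-RMSE computation that underlies per-sequence ATE figures.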

Key words: multi-sensor fusion, dataset, complex scenes, UWB, unmanned platform

CLC number: