Bulletin of Surveying and Mapping (测绘通报) ›› 2025, Vol. 0 ›› Issue (9): 105-111. doi: 10.13474/j.cnki.11-2246.2025.0917

• Academic Research •

A visual SLAM algorithm based on illumination-robust feature extraction and dynamic feature removal

KE Xueliang1, XIAO Wei1, QU Naizhu1, HE Zhijie1,2, HUANG Rui1   

  1. Joint Logistic Support Force Engineering University, Chongqing 401331, China;
    2. Troops 31680, Chongzhou 611233, China
  • Received: 2025-03-14  Published: 2025-09-29
  • Corresponding author: XIAO Wei. E-mail: wzry@163.com
  • About the first author: KE Xueliang (1997—), male, master's student; his main research interest is visual SLAM. E-mail: kxl1997kxl@163.com
  • Funding:
    Key Project of the Science and Technology Research Program of the Chongqing Municipal Education Commission, 2023 (KJZD-K202312903); Chongqing Postgraduate Research and Innovation Project, 2024 (CYS240831; CYS240832)

Abstract: To address the low localization accuracy of visual SLAM algorithms caused by moving objects and illumination changes in the environment, this paper proposes a high-precision visual SLAM algorithm suited to dynamic environments with varying illumination. The algorithm is built on the VINS-Mono architecture. First, it performs illumination-robust feature extraction: a ResNet network extracts features from the input image, yielding initial feature point coordinates and illumination-invariant feature maps, and optical flow is then estimated on these maps to reduce the impact of illumination changes on feature tracking. Next, it performs dynamic feature removal: pixel-level semantic segmentation with YOLOv8 marks dynamic objects in the input image as masks, and epipolar geometry constraints are used to discard dynamic features within the masked regions, leaving stable static feature points for tracking and reducing the impact of dynamic features on localization accuracy. Finally, comparative experiments against VINS-Mono on the EuRoC, VIODE, and Market datasets show that the proposed algorithm reduces the absolute trajectory error by 55.09% on average across the three datasets, achieving good localization accuracy in dynamic environments with varying illumination.
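
Illustration only (not the authors' implementation): the Python/OpenCV sketch below shows one plausible form of the dynamic-feature-removal step described in the abstract, i.e. flagging matched features that fall inside a segmentation mask of movable objects and also violate the epipolar constraint. The function name remove_dynamic_features and the inputs pts_prev, pts_curr, dynamic_mask, and epi_thresh are hypothetical names introduced here; only the OpenCV calls (findFundamentalMat, computeCorrespondEpilines) are standard library functions.

import numpy as np
import cv2

def remove_dynamic_features(pts_prev, pts_curr, dynamic_mask, epi_thresh=1.0):
    """Keep only feature matches that pass an epipolar-geometry check.

    pts_prev, pts_curr : (N, 2) float32 arrays of matched pixel coordinates
                         in the previous and current frames.
    dynamic_mask       : (H, W) bool array, True where a movable object is segmented.
    epi_thresh         : max point-to-epipolar-line distance (pixels) to count as static.
    Returns a boolean array of length N: True for features kept as static.
    """
    if len(pts_prev) < 8:
        return np.ones(len(pts_curr), dtype=bool)  # too few matches to estimate F

    # Estimate the fundamental matrix with RANSAC; dynamic points tend to be outliers.
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None or F.shape != (3, 3):
        return np.ones(len(pts_curr), dtype=bool)  # fall back: keep everything

    # Epipolar lines in the current image induced by points of the previous image.
    lines = cv2.computeCorrespondEpilines(pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    x, y = pts_curr[:, 0], pts_curr[:, 1]
    dist = np.abs(a * x + b * y + c) / np.sqrt(a ** 2 + b ** 2)

    # A point inside a segmented movable object that also violates the epipolar
    # constraint is treated as dynamic and removed from tracking.
    cols = np.clip(x.astype(int), 0, dynamic_mask.shape[1] - 1)
    rows = np.clip(y.astype(int), 0, dynamic_mask.shape[0] - 1)
    is_dynamic = dynamic_mask[rows, cols] & (dist > epi_thresh)
    return ~is_dynamic

# Usage sketch: keep = remove_dynamic_features(p0, p1, mask); p0, p1 = p0[keep], p1[keep]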

Key words: dynamic feature removal, illumination-robust feature extraction, optical flow method, VINS-Mono

CLC number: