Bulletin of Surveying and Mapping ›› 2020, Vol. 0 ›› Issue (12): 11-16. doi: 10.13474/j.cnki.11-2246.2020.0381

• Academic Research •

A fusion method of mobile video and geographic scene

ZHAO Weisong1, QIAN Jianguo1, TANG Shengjun2, WANG Weixi2, LI Xiaoming2, GUO Han3,4

  1. School of Geomatics, Liaoning Technical University, Fuxin 123000, China;
  2. Research Institute for Smart Cities, Shenzhen University, Shenzhen 518061, China;
  3. Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Land and Resources, Shenzhen 518061, China;
  4. Shenzhen Digital City Engineering Research Center, Shenzhen 518034, China
  • Received: 2020-01-13  Published: 2021-01-06
  • About the author: ZHAO Weisong (1995-), male, master's student; research interest: image processing. E-mail: 1078198343@qq.com
  • Fund program: Youth Program of the National Natural Science Foundation of China (41801392); Free Exploration Fund of the Shenzhen Science and Technology Innovation Commission (JCTJ20180305125131482); Open Fund of the Key Laboratory of Urban Natural Resources Monitoring and Simulation, Ministry of Natural Resources (KF-2019-04-010)

Abstract: Quickly and accurately grasping the situation at a disaster site is the top priority in disaster relief. Drones are commonly used for on-site surveys when a disaster occurs, but it is difficult to associate drone video with the actual geographic scene. To this end, this paper proposes a method for fusing mobile video with the geographic scene. The method first detects feature points with the affine-invariant ASIFT algorithm, then iteratively removes mismatched points from the matched feature points with the RANSAC algorithm to compute the optimal perspective transformation matrix parameters between the video and the geographic scene. The computed perspective transformation parameters are then applied to the video data to recover the corner coordinates of the video frames. Finally, the corner coordinates of all video frames are obtained by interpolation, achieving accurate fusion of the video with the DOM. The experimental results show that the shorter the frame interval used for matching the video data, the higher the overall fusion accuracy; the standard deviation of the video-to-geographic-scene fusion error with this method is below 10 m.
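As a rough illustration of the pipeline the abstract describes, the sketch below chains affine-invariant feature matching, RANSAC-based homography estimation, and corner recovery with OpenCV. It is a minimal sketch, not the paper's implementation: cv2.AffineFeature (OpenCV >= 4.5) is used as an ASIFT-style detector; the file names, ratio-test and RANSAC thresholds, and the linear interpolation between matched key frames are illustrative assumptions.

```python
# Minimal sketch of the matching-and-fusion steps, assuming OpenCV >= 4.5
# (cv2.AffineFeature wraps SIFT into an ASIFT-style affine-invariant detector).
# File names and thresholds are illustrative, not from the paper.
import cv2
import numpy as np

frame = cv2.imread("key_frame.png", cv2.IMREAD_GRAYSCALE)  # one matched video frame
dom = cv2.imread("dom.png", cv2.IMREAD_GRAYSCALE)          # digital orthophoto map

# 1. Affine-invariant (ASIFT-style) feature detection on both images
detector = cv2.AffineFeature.create(cv2.SIFT_create())
kp_f, des_f = detector.detectAndCompute(frame, None)
kp_d, des_d = detector.detectAndCompute(dom, None)

# 2. Nearest-neighbour matching with Lowe's ratio test to pre-filter matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_f, des_d, k=2)
        if m.distance < 0.75 * n.distance]
src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_d[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# 3. RANSAC iteratively rejects mismatched points while estimating the
#    optimal perspective transformation (3x3 homography) from frame to DOM
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)

# 4. Apply the perspective transformation to the frame corners to recover
#    their coordinates in the DOM (geographic) image space
h, w = frame.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
corners_on_dom = cv2.perspectiveTransform(corners, H)

# 5. Corner coordinates of the frames between two matched key frames i and j
#    can then be interpolated; linear interpolation is one plausible scheme
#    (the abstract does not specify which one the paper uses).
def interpolate_corners(c_i, c_j, i, j):
    for k in range(i + 1, j):
        t = (k - i) / (j - i)
        yield k, (1.0 - t) * c_i + t * c_j
```

Consistent with the abstract's finding, shortening the key-frame interval in step 5 leaves less motion to interpolate over, which is why the overall fusion accuracy improves.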

Key words: ASIFT algorithm, random sample consensus (RANSAC) algorithm, image matching, perspective transformation, image fusion

CLC number: