测绘通报 (Bulletin of Surveying and Mapping) ›› 2023, Vol. 0 ›› Issue (10): 61-66. doi: 10.13474/j.cnki.11-2246.2023.0296

• Academic Research •

Texture mapping for real-scene 3D model and real-time video image fusion

YANG Song1,2, CHEN Chongcheng1,2   

  1. Key Lab of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou 350108, China;
    2. National Engineering Research Center of Geospatial Information Technology, Fuzhou University, Fuzhou 350108, China
  • Received: 2022-12-29  Published: 2023-10-28
  • About the author: YANG Song (1998-), male, master's degree, mainly engaged in research on virtual geographic environments. E-mail: ysbestcome@foxmail.com
  • Fund program:
    Science and Technology Innovation Leading Talent Program of Fujian Province

Texture mapping for real-scene 3D model and real-time video image fusion

YANG Song1,2, CHEN Chongcheng1,2   

  1. Key Lab of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou 350108, China;
    2. National Engineering Research Center of Geospatial Information Technology, Fuzhou University, Fuzhou 350108, China
  • Received: 2022-12-29  Published: 2023-10-28

Abstract: To address the fragmented appearance of video textures, the conspicuous difference at the fusion boundary, and the poor real-time performance of the video stream when fusing real-time video with real-scene 3D models, this paper proposes an improved method for fusing real-scene 3D models with real-time video images. First, a real-scene 3D model is built by oblique photogrammetry, a streaming-media service is set up to transmit video images into the 3D scene in real time, and the correspondence between video pixels and the fragment vertices of the real-scene model is matched. Then, a flattening mapping strategy that takes scene depth into account is introduced to eliminate the texture fragmentation that appears when the video frame is merged into the model. Finally, a distance mask is used to make the transition at the fusion seam smoother. The experimental results show that the proposed mapping strategy achieves an excellent fusion effect, eliminating both the fragmentation of the fused imagery and the abruptness of the fusion boundary; with multiple video streams fused, the scene frame rate stays above 50 frames/s, indicating good system capacity and real-time performance. The results can be applied to building digital twin scenes with real-time virtual-real fusion.

Keywords: virtual-real fusion, video texture, augmented virtual environment, perspective projection, digital twins

Abstract: Aiming at the problems of fragmented video textures, obvious differences at the fusion boundary, and poor real-time performance of the video stream when merging real-time video with a real-scene 3D model, an improved method for fusing real-scene 3D models with real-time video images is proposed. Firstly, we use oblique photogrammetry to construct a real-scene 3D model, build a streaming-media service to transmit video images to the 3D scene in real time, and match the correspondence between video pixels and the fragment vertices of the real-scene model. Then a flattening mapping strategy that takes scene depth into account is introduced to eliminate texture splitting when the video is integrated into the model. Finally, the transition at the fusion seam is smoothed with a distance mask. The experimental results show that the mapping strategy proposed in this paper has an excellent fusion effect: it eliminates the fragmentation of the fused picture and the abruptness of the fusion boundary, and the scene frame rate remains above 50 frames/s when multiple video channels are fused. The system has good carrying capacity and real-time performance, and the results can be applied to constructing digital twin scenes that fuse the virtual and the real in real time.
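
Although no code accompanies the abstract, the pipeline it describes (projecting video pixels onto model fragments by perspective projection, a depth comparison so that only surfaces visible to the camera receive the video texture, and a distance mask that feathers the fusion seam) can be sketched briefly. The Python/NumPy sketch below is only a schematic of this general idea under assumed conventions (row-vector matrix math, texture coordinates and depth normalized to [0, 1]); the names project_to_video, distance_mask, and fuse_colors are hypothetical, not the authors' implementation.

    # Illustrative sketch (assumed names and conventions), not the paper's code:
    # project model points into the video camera, test visibility against the
    # camera's depth map, and blend the video color in with a border-fading mask.
    import numpy as np

    def project_to_video(points_world, view_matrix, proj_matrix):
        """Project world-space points into the video camera; return texture
        coordinates (u, v) in [0, 1] and projective depth in [0, 1]."""
        n = points_world.shape[0]
        homog = np.hstack([points_world, np.ones((n, 1))])   # (n, 4) homogeneous
        clip = homog @ (proj_matrix @ view_matrix).T          # world -> clip space
        ndc = clip[:, :3] / clip[:, 3:4]                      # perspective divide
        uv = ndc[:, :2] * 0.5 + 0.5                           # [-1, 1] -> [0, 1]
        depth = ndc[:, 2] * 0.5 + 0.5
        return uv, depth

    def distance_mask(uv, falloff=0.1):
        """Blending weight that fades to zero near the edge of the video frustum
        so the seam between video and model texture transitions smoothly."""
        border = np.minimum.reduce([uv[:, 0], 1.0 - uv[:, 0], uv[:, 1], 1.0 - uv[:, 1]])
        return np.clip(border / falloff, 0.0, 1.0)            # 0 outside the frustum

    def fuse_colors(model_rgb, video_rgb, uv, frag_depth, camera_depth, bias=1e-3):
        """Blend per-fragment model color with the sampled video color; the video
        contributes only where the fragment passes the camera depth test."""
        visible = (frag_depth <= camera_depth + bias).astype(float)
        w = distance_mask(uv) * visible
        return model_rgb * (1.0 - w[:, None]) + video_rgb * w[:, None]

In a real renderer these steps would run per fragment in a shader, with camera_depth sampled from a depth map rendered from the video camera's viewpoint, which is what makes a depth-aware mapping of this kind possible.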

Key words: virtual and real fusion, video texture, augmented virtual environment, perspective projection, digital twins
