测绘通报 (Bulletin of Surveying and Mapping), 2026, Vol. 0, Issue (1): 151-155, 171. doi: 10.13474/j.cnki.11-2246.2026.0124

• Technical Exchange •

Canopy gap information extraction method based on improved PSPNet network

LIU Danying1,2, XIA Jisheng2

  1. Xinjiang Jianghai Surveying and Mapping Technology Company, Urumqi 830000, China;
    2. School of Earth Sciences, Yunnan University, Kunming 650500, China
  • Received: 2025-04-07; Published: 2026-02-03
  • Corresponding author: XIA Jisheng. E-mail: xiajsh@ynu.edu.cn
  • About the author: LIU Danying (1995—), female, master's student; her main research interest is deep-learning-based object detection and recognition in remote sensing images. E-mail: 909178480@qq.com
  • Funding: National Natural Science Foundation of China (42061038)


Abstract: Understanding the spatial distribution of canopy gaps is of great importance for the conservation and maintenance of forest ecosystems. In the task of extracting canopy gap information from GF-2 remote sensing imagery, given the extensive and complex distribution of canopy gaps within forest systems, traditional remote sensing interpretation methods are inefficient and prone to misclassification and omission. Therefore, a canopy gap information extraction model based on an improved PSPNet network is proposed. The model replaces the backbone network to make it lightweight, incorporates the CBAM attention mechanism, and improves the loss function, which strengthens the learning of canopy gap information and addresses the inaccurate recognition of canopy gap edge details caused by the imbalance between positive and negative samples. Compared with the original PSPNet, the improved model increases the mean intersection over union (mIoU) by 3.12 percentage points and the mean pixel accuracy by 3.6 percentage points, and improves the detection speed by 65.43%. These results demonstrate the effectiveness of the proposed method for canopy gap information recognition.
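The abstract names two generic deep-learning components, the CBAM attention module and a loss function that counteracts the positive/negative (gap/background) sample imbalance, but gives no implementation details. The PyTorch sketch below is a hypothetical illustration of what such components commonly look like, not the authors' code; the reduction ratio, the 7×7 spatial kernel, and the focal-plus-Dice combination are assumptions made for the example.

```python
# A minimal sketch (illustrative assumptions, not the paper's implementation) of
# a CBAM attention block (Woo et al., 2018) and a class-imbalance-aware
# segmentation loss for binary gap/background prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention, applied to a feature map of shape (N, C, H, W)."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Shared MLP for channel attention (applied to avg- and max-pooled vectors).
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Convolution over the stacked avg/max maps for spatial attention.
        self.spatial_conv = nn.Conv2d(
            2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2, bias=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: squeeze H and W with average and max pooling.
        avg_pool = F.adaptive_avg_pool2d(x, 1)
        max_pool = F.adaptive_max_pool2d(x, 1)
        channel_att = torch.sigmoid(self.mlp(avg_pool) + self.mlp(max_pool))
        x = x * channel_att
        # Spatial attention: squeeze the channel dimension with mean and max.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        spatial_att = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * spatial_att


def focal_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                    alpha: float = 0.25, gamma: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    """Binary segmentation loss combining a focal term (down-weights the abundant
    background pixels) with a Dice term (directly optimizes region overlap).

    logits: raw scores of shape (N, 1, H, W); target: float {0, 1} mask, same shape.
    """
    prob = torch.sigmoid(logits)
    # Focal term: per-pixel BCE modulated by (1 - p_t)^gamma.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (alpha_t * (1 - p_t) ** gamma * bce).mean()
    # Dice term on the soft predictions, averaged over the batch.
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + eps) / (union + eps)
    return focal + dice.mean()
```

In an improved PSPNet, an attention block of this kind would typically be applied to the backbone or pyramid-pooling feature maps, and the loss to the binary gap/background prediction; the exact placement, loss form, and weighting used in the paper are not specified in the abstract.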

Key words: object-oriented classification, membership function, deep learning, semantic segmentation, canopy gap

CLC number: