Bulletin of Surveying and Mapping ›› 2025, Vol. 0 ›› Issue (11): 84-90, 153. doi: 10.13474/j.cnki.11-2246.2025.1113


Land cover classification in multi-modal remote sensing images using dual attention and multi-branch losses

YU Xiaowei, ZHENG Yadong, LIANG Li   

  1. Henan Remote Sensing Institute, Zhengzhou 450000, China
  • Received: 2025-09-08    Published: 2025-12-04

Abstract: Existing land cover classification methods still face numerous challenges in feature extraction and fusion quality. This paper proposes a multi-modal image classification method named Dual-Attentive Triple-branch FusionNet (DATF-Net), which integrates dual-attention collaboration and a multi-branch joint loss. A cross-correlation feature enhancement strategy is adopted, and a channel-spatial attention collaboration mechanism is introduced to achieve comprehensive fusion of complementary multi-modal features. The consistency of the decisions made by the different branches is ensured under the joint constraint of Dice loss and cross-entropy loss. Ablation and comparative experiments are conducted on the Dongying multi-modal image dataset. The results demonstrate that both the dual-attention collaboration mechanism and the multi-branch joint loss function contribute to improving land cover classification accuracy. Compared with other methods, DATF-Net achieves the best precision across the individual land cover categories and the best scores on multiple overall classification metrics. Notably, the OA and FWIoU of DATF-Net exceed those of the second-best method (VFesuNet) by 7.9% and 12.66%, respectively. The proposed method effectively mitigates speckle noise interference in SAR images, enhances the coherence of classification boundaries, and improves classification accuracy and robustness in complex urban scenes.
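Since only the abstract is reproduced here, the joint objective can only be sketched. Below is a minimal PyTorch-style sketch of a Dice + cross-entropy joint loss of the kind the abstract describes; the class name JointDiceCELoss and the weights ce_weight and dice_weight are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointDiceCELoss(nn.Module):
        """Sketch of a joint Dice + cross-entropy loss (weights are hypothetical)."""
        def __init__(self, num_classes, ce_weight=1.0, dice_weight=1.0, eps=1e-6):
            super().__init__()
            self.num_classes = num_classes
            self.ce_weight = ce_weight
            self.dice_weight = dice_weight
            self.eps = eps

        def forward(self, logits, target):
            # logits: (N, C, H, W); target: (N, H, W) with integer class indices
            ce = F.cross_entropy(logits, target)
            probs = F.softmax(logits, dim=1)
            one_hot = F.one_hot(target, self.num_classes).permute(0, 3, 1, 2).float()
            dims = (0, 2, 3)
            intersection = (probs * one_hot).sum(dims)
            cardinality = probs.sum(dims) + one_hot.sum(dims)
            dice = (2.0 * intersection + self.eps) / (cardinality + self.eps)
            dice_loss = 1.0 - dice.mean()
            return self.ce_weight * ce + self.dice_weight * dice_loss

Likewise, the "channel-spatial attention collaboration" is not specified in the abstract; the sketch below assumes a common CBAM-style sequential design, with all layer sizes (reduction ratio, 7x7 spatial kernel) chosen for illustration only.

    class ChannelSpatialAttention(nn.Module):
        """Sketch of sequential channel and spatial attention (illustrative sizes)."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            n, c, _, _ = x.shape
            # channel attention: shared MLP over global average- and max-pooled features
            ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
            x = x * ca.view(n, c, 1, 1)
            # spatial attention: 7x7 conv over channel-wise mean and max maps
            s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))

Both sketches operate on standard (N, C, H, W) feature tensors, so fused SAR/optical features from any backbone could be passed through them unchanged.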

Key words: multi-modal remote sensing imagery, land cover classification, convolutional neural network, attention mechanism, loss function
