[1] 石丽丽.京西石灰石采石场废弃地植被恢复效果及其评价研究[D].北京: 北京林业大学,2014.
[2] 宋百敏.北京西山废弃采石场生态恢复研究:自然恢复的过程、特征与机制[D].济南: 山东大学,2008.
[3] 张玉虎,于长青,宋百敏,等.快速监测评估废弃采石场生态恢复的研究[J].生态与农村环境学报,2007,23(3): 36-40.
[4] 赵浩腾.基于Landsat长时间遥感影像的采石场面积监测与分析[D].北京: 中国科学院大学(中国科学院遥感与数字地球研究所),2018.
[5] 尹红,杨广斌,安裕伦.贵阳市采石场遥感动态监测研究[J].环保科技,2007,13(1): 6-10.
[6] 马得利,孙永康,杨建英,等.基于无人机遥感技术的废弃采石场立地条件类型划分[J].北京林业大学学报,2018,40(9): 90-97.
[7] 王耿明,朱俊凤,陈捷,等.采石场绿色矿山建设无人机动态监测: 以广州市太珍石场为例[J].地矿测绘,2019,35(3): 29-30.
[8] 马林飞,倪欢,周子涵.基于注意力机制的高分遥感图像采石场识别[J].测绘通报,2021(11): 96-100.
[9] SHELHAMER E,LONG J,DARRELL T.Fully convolutional networks for semantic segmentation[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(4): 640-651.
[10] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM,2017,60(6): 84-90.
[11] TRAORE B B,KAMSU-FOGUEM B,TANGARA F.Deep convolution neural network for image recognition[J].Ecological Informatics,2018,48: 257-268.
[12] HE Kaiming,ZHANG Xiangyu,REN Shaoqing,et al.Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).[S.l.]: IEEE,2016: 770-778.
[13] CHEN L C,PAPANDREOU G,KOKKINOS I,et al.DeepLab: semantic image segmentation with deep convolutional nets,atrous convolution,and fully connected CRFs[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(4): 834-848.
[14] ROMERA E,ÁLVAREZ J M,BERGASA L M,et al.ERFNet: efficient residual factorized ConvNet for real-time semantic segmentation[J].IEEE Transactions on Intelligent Transportation Systems,2018,19(1): 263-272.
[15] ZHAO Hengshuang,ZHANG Yi,LIU Shu,et al.PSANet: point-wise spatial attention network for scene parsing[M]//Computer Vision-ECCV 2018.Cham: Springer International Publishing,2018: 270-286.
[16] DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al.An image is worth 16×16 words: transformers for image recognition at scale[C]//Proceedings of the International Conference on Learning Representations (ICLR).[S.l.]: ICLR,2021.
[17] TOUVRON H,CORD M,DOUZE M,et al.Training data-efficient image transformers & distillation through attention[C]//Proceedings of 2021 International Conference on Machine Learning.[S.l.]: PMLR,2021.
[18] LIU Ze,LIN Yutong,CAO Yue,et al.Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision (ICCV).Montreal,QC,Canada: IEEE,2021: 9992-10002.
[19] TOLSTIKHIN I,HOULSBY N,KOLESNIKOV A,et al.MLP-Mixer: an all-MLP architecture for vision[J].Advances in Neural Information Processing Systems,2021,34: 24261-24272.
[20] DING Xiaohan,CHENG Honghao,ZHANG Xiangyu,et al.RepMLPNet: hierarchical vision MLP with re-parameterized locality[C]//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition.[S.l.]: IEEE,2022: 578-587.
[21] DENG Jia,DONG Wei,SOCHER R,et al.ImageNet: a large-scale hierarchical image database[C]//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition.Miami,FL,USA: IEEE,2009: 248-255.
[22] CHEN Shoufa,XIE Enze,GE Chongjian,et al.CycleMLP: an MLP-like architecture for dense prediction[C]//Proceedings of the International Conference on Learning Representations (ICLR).[S.l.]: ICLR,2022.
[23] ISLAM M A,JIA Sen,BRUCE N D B.How much position information do convolutional neural networks encode?[C]//Proceedings of the International Conference on Learning Representations (ICLR).[S.l.]: ICLR,2020.
[24] CHU Xiangxiang,TIAN Zhi,ZHANG Bo,et al.Twins: revisiting the design of spatial attention in vision transformers[J].Advances in Neural Information Processing Systems,2021,34: 9355-9366.
[25] XIE E,WANG Wenhai,YU Zhiding,et al.SegFormer: simple and efficient design for semantic segmentation with transformers[J].Advances in Neural Information Processing Systems,2021,34: 12077-12090.
[26] DEVLIN J,CHANG M W,LEE K,et al.BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of NAACL-HLT.[S.l.]: NAACL-HLT,2019: 4171-4186.
[27] YANG T J,CHEN Y H,SZE V.Designing energy-efficient convolutional neural networks using energy-aware pruning[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition.[S.l.]: IEEE,2017: 5687-5695.
[28] PENG Chengli,ZHANG Kaining,MA Yong,et al.Cross fusion net: a fast semantic segmentation network for small-scale semantic information capturing in aerial scenes[J].IEEE Transactions on Geoscience and Remote Sensing,2022,60: 1-13.
[29] DONG Xiaoyi,BAO Jianmin,CHEN Dongdong,et al.CSWin transformer: a general vision transformer backbone with cross-shaped windows[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.[S.l.]: IEEE,2022: 12124-12134.
[30] MEHTA S,RASTEGARI M.MobileViT: light-weight,general-purpose,and mobile-friendly vision transformer[C]//Proceedings of the International Conference on Learning Representations (ICLR).[S.l.]: ICLR,2022.