Multi-stage Image Fusion Method Based on Differential Dual-Branch Encoder
HONG Yulu1, WU Xiaojun1, XU Tianyang1
1. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computing Intelligence, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122
|
|
Abstract In existing infrared and visible image fusion methods, details of the fused image are severely lost and the visual effect is poor. To address these problems, a multi-stage image fusion method based on a differential dual-branch encoder is proposed. Features of the multi-modal images are extracted by two encoders with different network structures to enhance feature diversity, and a multi-stage fusion strategy is designed to achieve refined image fusion. Firstly, primary fusion is performed on the differential features extracted by the two branches of the differential dual-branch encoder. Then, mid-level fusion is conducted on the saliency features of the multi-modal images in the fusion stage. Finally, long-range lateral connections transmit shallow features of the differential dual-branch encoder to the decoder, guiding both the fusion process and the image reconstruction. Experimental results show that the proposed method enhances the detail information of the fused images and achieves better performance in both visual effect and objective evaluation.
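The three fusion stages described above can be sketched as follows. This is a minimal illustrative pipeline, not the paper's actual networks: the two "branches" are stand-ins (a mean filter and a gradient operator) chosen only because they have different structures, and all function names (`box_blur`, `gradient_mag`, `saliency_weights`, `fuse`) and the final blending weights are hypothetical.

```python
import numpy as np

def box_blur(img, k=3):
    # Stand-in for encoder branch A: a simple k*k mean filter.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gradient_mag(img):
    # Stand-in for the structurally different encoder branch B:
    # per-pixel gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def saliency_weights(f_ir, f_vis, eps=1e-8):
    # Mid-level fusion: soft per-pixel weights from feature activity.
    s_ir, s_vis = np.abs(f_ir), np.abs(f_vis)
    return s_ir / (s_ir + s_vis + eps)

def fuse(ir, vis):
    # Stage 1: primary fusion of the differential features, i.e. the
    # disagreement between the two encoding branches per modality.
    fa_ir, fb_ir = box_blur(ir), gradient_mag(ir)
    fa_vis, fb_vis = box_blur(vis), gradient_mag(vis)
    diff = 0.5 * ((fa_ir - fb_ir) + (fa_vis - fb_vis))

    # Stage 2: saliency-weighted mid-level fusion across modalities.
    w = saliency_weights(fa_ir, fa_vis)
    mid = w * fa_ir + (1.0 - w) * fa_vis

    # Stage 3: long-range lateral connection — shallow (here: raw input)
    # features are passed straight to the "decoder" to guide
    # reconstruction. The blend weights are arbitrary placeholders.
    shallow = 0.5 * (ir + vis)
    return 0.6 * mid + 0.2 * diff + 0.2 * shallow
```

In the actual method the branches are learned convolutional encoders and the decoder is a network; the sketch only shows how the differential, saliency, and lateral-connection signals combine across stages.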
|
Received: 27 April 2022
|
|
Fund: Supported by National Natural Science Foundation of China (Nos. 62020106012, U1836218, 61672265) and the 111 Project of the Ministry of Education of China (No. B12018)
Corresponding Authors:
WU Xiaojun, Ph.D., professor. His research interests include artificial intelligence, pattern recognition and computer vision.
|
About authors: HONG Yulu, master student. Her research interests include image fusion and deep learning.
XU Tianyang, Ph.D., associate professor. His research interests include artificial intelligence, pattern recognition and computer vision.
|
|
|
|
|
|