Adjacent Feature Combination Based Adaptive Fusion Network for Infrared and Visible Images
XU Shaoping1, CHEN Xiaojun1, LUO Jie2, CHENG Xiaohui1, XIAO Nan1
1. School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031; 2. Infectious Disease Hospital Affiliated to Nanchang University, Nanchang 330006
|
|
Abstract To obtain an infrared and visible fusion image with clear target edges and rich texture details, a fusion network model, the adjacent feature combination based adaptive fusion network (AFCAFNet), is proposed by improving the network architecture and the loss function of the classical feed-forward denoising convolutional neural network (DnCNN) backbone. The feature channels of several adjacent convolutional layers in the first half of the DnCNN network are fully fused by expanding the number of channels, and the abilities of the model to extract and transmit feature information are consequently enhanced. All batch normalization layers in the network are removed to improve computational efficiency, and the original rectified linear unit (ReLU) is replaced with the leaky ReLU to alleviate the vanishing gradient problem. To better handle the fusion of images with different scene contents, the gradient feature responses of the infrared and visible images are extracted with the VGG16 image classification model and, after normalization, are taken as the weight coefficients of the infrared and visible images, respectively. These weight coefficients are applied to three loss terms: mean square error, structural similarity and total variation. Experimental results on benchmark databases show that AFCAFNet holds significant advantages in both subjective and objective evaluations. In particular, it renders clearer edges and richer texture details for specific targets, in better accordance with the characteristics of human visual perception.
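As a concrete illustration of the adjacent feature combination strategy described above, the following PyTorch sketch shows one possible building block: the feature maps of two adjacent convolutional layers are concatenated along the channel dimension (the channel-expansion strategy), batch normalization is omitted, and leaky ReLU replaces ReLU. This is a minimal sketch under stated assumptions, not the authors' released implementation; the class name AdjacentCombineBlock and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class AdjacentCombineBlock(nn.Module):
    """Hypothetical block: fuse two adjacent layers' features by channel concatenation."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # The concatenated input doubles the channel count (channel-expansion strategy).
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        # No batch normalization; leaky ReLU is used to alleviate vanishing gradients.
        self.act = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, prev_feat: torch.Tensor, curr_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate the two adjacent feature maps channel-wise, then project back.
        return self.act(self.conv(torch.cat([prev_feat, curr_feat], dim=1)))
```

Stacking several such blocks along the first half of a DnCNN-style backbone would realize the described full fusion of adjacent feature channels.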
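The adaptive loss weighting can likewise be sketched. In the code below, VGG16 feature maps of each source image are reduced to a scalar gradient response, the two responses are normalized (here with a softmax, an assumption) into the weight coefficients, and the weights scale the mean square error, structural similarity and total variation terms. The helper names and the use of the third-party pytorch_msssim package for SSIM are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights
from pytorch_msssim import ssim  # third-party SSIM; any differentiable SSIM would do

# Frozen VGG16 feature extractor, used only to measure gradient responses.
vgg_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()

def gradient_response(img: torch.Tensor) -> torch.Tensor:
    """Mean gradient magnitude of VGG16 feature maps (grayscale replicated to 3 channels)."""
    feat = vgg_features(img.repeat(1, 3, 1, 1))
    gx = (feat[..., :, 1:] - feat[..., :, :-1]).abs().mean()  # horizontal differences
    gy = (feat[..., 1:, :] - feat[..., :-1, :]).abs().mean()  # vertical differences
    return gx + gy

def total_variation(x: torch.Tensor) -> torch.Tensor:
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

def fusion_loss(fused: torch.Tensor, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
    # Normalized gradient responses serve as the per-image weight coefficients.
    with torch.no_grad():
        w = torch.softmax(torch.stack([gradient_response(ir),
                                       gradient_response(vis)]), dim=0)
    w_ir, w_vis = w[0], w[1]

    def weighted(term):
        return w_ir * term(fused, ir) + w_vis * term(fused, vis)

    # Weighted sum of the three loss terms: MSE, SSIM and total variation.
    return (weighted(F.mse_loss)
            + weighted(lambda a, b: 1.0 - ssim(a, b, data_range=1.0))
            + weighted(lambda a, b: total_variation(a - b)))
```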
|
Received: 20 September 2022
|
|
Fund: National Natural Science Foundation of China (No. 62162043, 61902168), Jiangxi Postgraduate Innovation Special Fund Project (No. YC2022-s033)
Corresponding Author:
XU Shaoping, Ph.D., professor. His research interests include graphics and image processing technology, machine vision and virtual surgical simulation.
|
About the authors: CHEN Xiaojun, master student. His research interests include graphics and image processing technology. LUO Jie, bachelor. Her research interests include medical image processing technology. CHENG Xiaohui, master student. Her research interests include graphics and image processing technology. XIAO Nan, master student. His research interests include graphics and image processing technology.
|
|
|
|
|
|