Zero-Shot Infrared and Visible Image Fusion Based on Fusion Curve
LIU Duo¹, ZHANG Guoyin¹, SHI Yiqi¹, TIAN Ye², ZHANG Liguo¹
1. College of Computer Science and Technology, Harbin Engineering University, Harbin 150001; 2. Hangzhou Institute of Technology, Xidian University, Hangzhou 311231
Abstract To address color distortion and the loss of thermal-target detail in infrared and visible image fusion, a zero-shot infrared and visible image fusion method based on a fusion curve (ZSFuCu) is proposed. The fusion task is reformulated as an image-specific curve estimation process carried out by a deep network, and pixel-level nonlinear mapping enhances the texture of thermal targets while preserving their color features. A multi-dimensional visual perception loss function is designed to construct the constraint mechanism from three perspectives: contrast enhancement, color preservation and spatial continuity. The high-frequency information and color distribution of the fused image are optimized jointly with the retention of structural features and key information. A zero-shot training strategy is adopted, so adaptive parameter optimization is completed with only a single infrared and visible image pair, yielding strong robustness across various lighting conditions. Experiments demonstrate that ZSFuCu significantly improves target prominence, detail richness and color naturalness, validating its effectiveness and practicality.
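The sketch below is not the authors' code; it is a minimal illustration, under stated assumptions, of the curve-estimation idea the abstract describes: a small network predicts per-pixel curve parameters from a single infrared and visible pair, the visible image is remapped by an iterative pixel-wise quadratic curve, and a hypothetical three-term loss (contrast, color preservation, spatial continuity) is optimized in a zero-shot fashion on that one pair. All network sizes, loss terms and weights are illustrative assumptions.

```python
# Minimal sketch (not the published ZSFuCu implementation) of curve-based
# infrared and visible fusion with zero-shot, single-pair optimization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurveEstimator(nn.Module):
    """Tiny CNN mapping the concatenated (visible, infrared) pair to
    per-pixel curve parameters for n_iters iterative curve applications."""
    def __init__(self, n_iters: int = 8):
        super().__init__()
        self.n_iters = n_iters
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),  # 3 RGB + 1 IR channels
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * n_iters, 3, padding=1), nn.Tanh(),    # one 3-channel map per iteration
        )

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        params = self.net(torch.cat([vis, ir], dim=1))               # (B, 3*n_iters, H, W)
        fused = vis
        for a in torch.chunk(params, self.n_iters, dim=1):
            # Pixel-level nonlinear mapping: quadratic curve that keeps values in [0, 1].
            fused = fused + a * (fused - fused.pow(2))
        return fused.clamp(0.0, 1.0)

def fusion_loss(fused, vis, ir, w_contrast=1.0, w_color=0.5, w_smooth=0.1):
    """Hypothetical loss echoing the three perspectives named in the abstract."""
    # Contrast / thermal-target term: fused intensity should not fall below the IR response.
    luma = fused.mean(dim=1, keepdim=True)
    l_contrast = F.relu(ir - luma).mean()
    # Color-preservation term: keep the fused chromatic deviation close to the visible image's.
    l_color = F.l1_loss(fused - fused.mean(dim=1, keepdim=True),
                        vis - vis.mean(dim=1, keepdim=True))
    # Spatial-continuity term: total-variation penalty on the fused output.
    l_smooth = (fused[..., :, 1:] - fused[..., :, :-1]).abs().mean() + \
               (fused[..., 1:, :] - fused[..., :-1, :]).abs().mean()
    return w_contrast * l_contrast + w_color * l_color + w_smooth * l_smooth

def fuse_single_pair(vis, ir, steps=200, lr=1e-4):
    """Zero-shot optimization on one pair (vis: Bx3xHxW, ir: Bx1xHxW, values in [0, 1])."""
    model = CurveEstimator()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = fusion_loss(model(vis, ir), vis, ir)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(vis, ir).detach()
```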
Fund: National Key Research and Development Program of China (No. 2021YFC3320302)
Corresponding Author: ZHANG Liguo, Ph.D., professor. His research interests include deep learning, machine learning and computer vision.
About the authors: LIU Duo, Ph.D. candidate. His research interests include image processing and image fusion. ZHANG Guoyin, Ph.D., professor. His research interests include deep learning and machine learning. SHI Yiqi, Ph.D. candidate. Her research interests include image processing and self-supervised learning. TIAN Ye, Ph.D. His research interests include image processing and intelligent adversarial techniques.
LIU Duo, ZHANG Guoyin, SHI Yiqi, et al. Zero-Shot Infrared and Visible Image Fusion Based on Fusion Curve[J]. Pattern Recognition and Artificial Intelligence, 2025, 38(3): 268-279.