Combining Visual Saliency and Attention Mechanism for Low-Light Image Enhancement
SHANG Xiaoke1,2, AN Nan2, SHANG Jingjie3, ZHANG Shaomin1, DING Nai1,4
1.College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027; 2.School of Software Technology, Dalian University of Technology, Dalian 116622; 3.School of Software and Microelectronics, Peking University, Beijing 102600; 4.Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Laboratory, Hangzhou 311121
Abstract: Low-light image enhancement is the foundation and core step for solving various visual analysis tasks in low-light environments. However, existing mainstream methods generally fail to characterize structural information effectively, which leads to problems such as unbalanced exposure and color distortion. Therefore, a low-light image enhancement network combining visual saliency and an attention mechanism is proposed. First, an attention-based low-light image enhancement framework is constructed: the attention mechanism takes both local details and global information into account so that color information in the enhancement results is represented correctly. Second, to achieve refined reconstruction, a progressive process is designed that refines the enhancement in stages, following a coarse-to-fine optimization strategy. Finally, a feature fusion module guided by visual saliency is introduced to strengthen the network's ability to perceive salient objects in images and to improve the expression of structural information in a way that better matches visual cognition, thereby effectively suppressing noise and artifacts. Experiments show that the proposed method effectively alleviates unbalanced exposure and color distortion and achieves superior performance.
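To make the saliency-guided feature fusion idea described in the abstract more concrete, the following is a minimal PyTorch sketch. It is an illustrative assumption, not the authors' implementation: the module name SaliencyGuidedFusion, the layer choices, and the channel sizes are hypothetical. A single-channel saliency map predicted from the input features reweights local structures, a global channel-attention branch captures image-level illumination cues, and the two branches are fused by a convolution.

```python
import torch
import torch.nn as nn


class SaliencyGuidedFusion(nn.Module):
    """Illustrative saliency-guided feature fusion block (hypothetical design,
    not the paper's exact architecture)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict a per-pixel saliency weight in [0, 1] from the input features.
        self.saliency_head = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Global (channel) attention capturing image-level color/illumination cues.
        self.global_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse the saliency-weighted local branch with the global branch.
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        sal = self.saliency_head(feat)        # B x 1 x H x W saliency map
        local = feat * sal                    # emphasize salient structures
        glob = feat * self.global_attn(feat)  # emphasize informative channels
        return self.fuse(torch.cat([local, glob], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)
    print(SaliencyGuidedFusion(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```

In a coarse-to-fine pipeline of the kind the abstract describes, such a block could be applied at each refinement stage, with the saliency map re-estimated from the progressively enhanced features.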