Abstract: One of the research emphases of deep-learning-based image inpainting is the generation of color, edges and texture, yet the methods for generating these three important properties still need improvement. A three-stage generative network is proposed, in which the three stages synthesize colors, edges and textures respectively. Specifically, at the HSV color generation stage, the global color of the image is reconstructed in the HSV color space to provide color guidance for inpainting. At the edge optimization stage, an edge learning framework is designed to obtain more accurate edge information. At the texture synthesis stage, a decoder with bidirectional feature fusion is designed to enhance image details. The three stages are connected sequentially, and each contributes to improving inpainting performance. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art approaches.
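As a rough illustration of how the three stages described above could be chained, the sketch below wires up a minimal PyTorch pipeline: a stage that predicts a coarse HSV color map, a stage that refines an edge map conditioned on that color, and a texture stage whose decoder mixes shallow and deep features before producing the RGB output. All module names, channel widths and layer counts are assumptions made purely for illustration; the paper's actual blocks (attention modules, discriminators, loss terms) are not reproduced here.

```python
# Minimal sketch of the three-stage cascade (assumed structure, not the paper's implementation).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU, used as a generic building block in this sketch."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class HSVColorStage(nn.Module):
    """Stage 1: predict a coarse global color map in HSV space (3 channels)."""
    def __init__(self):
        super().__init__()
        # Input: masked HSV image (3 channels) + binary mask (1 channel).
        self.net = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, hsv_masked, mask):
        return torch.sigmoid(self.net(torch.cat([hsv_masked, mask], dim=1)))


class EdgeStage(nn.Module):
    """Stage 2: refine an edge map, conditioned on the predicted color."""
    def __init__(self):
        super().__init__()
        # Input: masked edge map (1) + predicted HSV color (3) + mask (1).
        self.net = nn.Sequential(conv_block(5, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, edge_masked, hsv_pred, mask):
        return torch.sigmoid(self.net(torch.cat([edge_masked, hsv_pred, mask], dim=1)))


class TextureStage(nn.Module):
    """Stage 3: encoder plus a decoder that fuses shallow and deep features
    (a rough stand-in for the 'bidirectional feature fusion' decoder)."""
    def __init__(self):
        super().__init__()
        # Input: masked RGB (3) + predicted HSV (3) + refined edges (1) + mask (1).
        self.enc1 = conv_block(8, 32)
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), conv_block(32, 64))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = conv_block(32 + 64, 64)   # merge shallow and deep features
        self.out = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, rgb_masked, hsv_pred, edge_pred, mask):
        x = torch.cat([rgb_masked, hsv_pred, edge_pred, mask], dim=1)
        shallow = self.enc1(x)
        deep = self.up(self.enc2(shallow))
        return torch.sigmoid(self.out(self.fuse(torch.cat([shallow, deep], dim=1))))


if __name__ == "__main__":
    b, h, w = 1, 64, 64
    rgb, hsv = torch.rand(b, 3, h, w), torch.rand(b, 3, h, w)
    edge, mask = torch.rand(b, 1, h, w), torch.rand(b, 1, h, w)
    hsv_pred = HSVColorStage()(hsv * mask, mask)                       # Stage 1: color guidance
    edge_pred = EdgeStage()(edge * mask, hsv_pred, mask)               # Stage 2: edge refinement
    rgb_pred = TextureStage()(rgb * mask, hsv_pred, edge_pred, mask)   # Stage 3: texture synthesis
    print(rgb_pred.shape)  # torch.Size([1, 3, 64, 64])
```

Each stage consumes the outputs of the previous one, mirroring the sequential connection of the three stages described in the abstract.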