Abstract: Manually rendering a traditional Chinese meticulous flower painting is a complicated and highly skilled process. Existing automatic line-drawing colorization methods struggle to generate natural and reasonable gradient effects. Based on the conditional generative adversarial network (CGAN), an interactive meticulous flower coloring algorithm via attention guidance is proposed to colorize meticulous flowers from line drawings. A color attention map depicting the color categories and layout of the flowers is designed to guide the proposed network to learn important color features during training. In the application stage, the color attention map serves as the means of interaction between the user and the system for color design. In the network structure, a local color-coding sub-network is constructed and trained to encode the flower color attention map. The encoded information is injected into the conditional normalization of each generator layer as affine parameters to learn and control colors. Since deep features emphasize global semantic information, local high-frequency information reflecting line features might be lost. A cross-layer connection structure is therefore introduced into the generator network to strengthen the learning of line features. Experimental results show that the proposed algorithm renders flower line drawings into meticulous flower paintings well, and the generated images accord with the color distribution and characteristics of real meticulous flower paintings, with good artistic realism and appreciation value.
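The conditional normalization step described in the abstract can be illustrated with a minimal NumPy sketch: per-channel features are instance-normalized, then modulated by affine parameters (scale and shift) that, in the paper's design, would be predicted by the local color-coding sub-network from the color attention map. The function name, shapes, and the hard-coded parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conditional_instance_norm(feat, gamma, beta, eps=1e-5):
    """AdaIN-style conditional normalization sketch.

    feat  : (C, H, W) feature map of one generator layer
    gamma : (C,) per-channel scale, assumed to come from the color encoder
    beta  : (C,) per-channel shift, assumed to come from the color encoder
    """
    # Normalize each channel to zero mean and unit variance over H x W.
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True)
    normalized = (feat - mean) / (std + eps)
    # Re-inject color information as an affine transform of the features.
    return gamma[:, None, None] * normalized + beta[:, None, None]

# Toy example: a 2-channel 4x4 feature map with hypothetical parameters.
rng = np.random.default_rng(0)
feat = rng.normal(size=(2, 4, 4))
gamma = np.array([2.0, 0.5])   # assumed scales from the color encoding
beta = np.array([1.0, -1.0])   # assumed shifts from the color encoding
out = conditional_instance_norm(feat, gamma, beta)
```

After modulation, each channel's spatial mean equals its shift `beta` and its spread is controlled by `gamma`, which is how the encoded color attention information can steer the colors produced at every layer of the generator.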