Generating calligraphic Chinese characters typically requires a large amount of prior knowledge about character composition, and collecting such data is labor-intensive, which limits the scalability of existing results. To address this problem, a method for generating calligraphic Chinese characters under a structure constraint using conditional stacked generative adversarial networks is proposed. The handwriting structure extracted directly from the source Chinese character image serves as the structure constraint condition, and high-quality calligraphic Chinese characters are generated by the conditional stacked generative adversarial network model. For datasets with few calligraphic Chinese character samples, a semi-supervised learning method based on pseudo target samples is proposed. Furthermore, calligraphic Chinese characters unseen during training can also be generated. Experiments show that the proposed method generates higher-quality calligraphic Chinese characters while using only a few samples of a specific calligraphic style.
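As an illustration of the generation pipeline described above, the following is a minimal PyTorch-style sketch of a two-stage ("stacked") conditional generator that takes a source character image and its extracted structure map (e.g., a skeleton) as the condition, drafts a coarse glyph, and then refines it in a second stage. The class names, channel counts, and layer sizes are illustrative assumptions and do not reproduce the authors' exact architecture or training losses.

# Minimal sketch (assumed architecture, not the authors' released code): a
# two-stage conditional generator in PyTorch. Stage 1 drafts a glyph from the
# source image and its structure map; stage 2 refines the draft under the same
# condition. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Conv -> BatchNorm -> ReLU, the basic unit used in both stages."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class StageGenerator(nn.Module):
    """One generator stage: maps (previous output + condition) to an image."""

    def __init__(self, in_ch, base_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, base_ch),
            conv_block(base_ch, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized glyph images
        )

    def forward(self, x):
        return self.net(x)


class StackedConditionalGenerator(nn.Module):
    """Stage 1 drafts a glyph from the structure condition; stage 2 refines it."""

    def __init__(self):
        super().__init__()
        # Stage 1 sees the source image and its structure (skeleton) map: 2 channels.
        self.stage1 = StageGenerator(in_ch=2)
        # Stage 2 sees the coarse output plus the same 2-channel condition: 3 channels.
        self.stage2 = StageGenerator(in_ch=3)

    def forward(self, source, structure):
        cond = torch.cat([source, structure], dim=1)
        coarse = self.stage1(cond)
        fine = self.stage2(torch.cat([coarse, cond], dim=1))
        return coarse, fine


if __name__ == "__main__":
    # Toy forward pass with a batch of two 64x64 single-channel character images.
    g = StackedConditionalGenerator()
    src = torch.randn(2, 1, 64, 64)    # source character images
    skel = torch.randn(2, 1, 64, 64)   # structure constraint (e.g., extracted skeleton)
    coarse, fine = g(src, skel)
    print(coarse.shape, fine.shape)    # both torch.Size([2, 1, 64, 64])

In a full adversarial setup, each stage would additionally be paired with a conditional discriminator that receives the structure map alongside the real or generated glyph; that part is omitted here for brevity.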