|
|
Generation of Localized and Visible Adversarial Perturbations
ZHOU Xingyu1,2, PAN Zhisong2, HU Guyu2, DUAN Yexin2,3 |
1. Communication Engineering College, Army Engineering University of PLA, Nanjing 210007; 2. Command and Control Engineering College, Army Engineering University of PLA, Nanjing 210007; 3. Zhenjiang Campus, Army Military Transportation University, Zhenjiang 212003
|
|
Abstract Deep neural networks are susceptible to adversarial attacks. Based on generative adversarial networks, a novel model, GAN for generating localized and visible adversarial perturbations (G2LVAP), is proposed. Firstly, the attacked classification network is designated as the discriminator, and its parameters are fixed during training. Then, a generator is constructed to produce localized and visible adversarial perturbations by jointly optimizing a fooling loss, a diversity loss and a distance loss. The generated perturbations can be placed anywhere in different input examples to attack the classification network. Finally, a class comparison method is proposed to analyze the effectiveness of the localized and visible adversarial perturbations. Experiments on public image classification datasets indicate that G2LVAP produces a satisfactory attack effect.
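To make the training scheme concrete, a minimal PyTorch-style sketch of one G2LVAP generator update follows. The paper's exact loss formulations are not reproduced here, so the fooling, diversity and distance losses, the loss weights (w_div, w_dist), the patch size and the patch-placement logic below are illustrative assumptions, and the function name g2lvap_step is hypothetical.

import torch
import torch.nn.functional as F

def g2lvap_step(generator, classifier, optimizer, images, labels,
                z_dim=100, patch=50, w_div=0.1, w_dist=0.01):
    # The attacked classifier serves as a fixed discriminator:
    # its parameters receive no gradient updates.
    for p in classifier.parameters():
        p.requires_grad_(False)

    b = images.size(0)
    z1 = torch.randn(b, z_dim, device=images.device)
    z2 = torch.randn(b, z_dim, device=images.device)
    p1 = generator(z1)  # assumed output shape: (b, 3, patch, patch)
    p2 = generator(z2)

    # Paste each patch at a random position: the perturbation is
    # localized and visible, not image-wide additive noise.
    adv = images.clone()
    for i in range(b):
        top = torch.randint(0, images.size(2) - patch + 1, (1,)).item()
        left = torch.randint(0, images.size(3) - patch + 1, (1,)).item()
        adv[i, :, top:top + patch, left:left + patch] = p1[i]

    logits = classifier(adv)

    # Fooling loss: drive the frozen classifier away from the true labels.
    l_fool = -F.cross_entropy(logits, labels)
    # Diversity loss: distinct latent codes should yield distinct patches.
    l_div = -F.l1_loss(p1, p2)
    # Distance loss: penalize patch pixels outside the valid image range.
    l_dist = (p1 - p1.clamp(0.0, 1.0)).abs().mean()

    loss = l_fool + w_div * l_div + w_dist * l_dist
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The optimizer is assumed to hold only the generator's parameters, so the attacked classification network remains unchanged throughout training, consistent with the fixed-discriminator design described above.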
|
Received: 28 August 2019
|
|
Fund: Supported by the National Key Research and Development Program of China (No. 2017YFB0802800) and the National Natural Science Foundation of China (No. 61473149)
Corresponding Author:
PAN Zhisong, Ph.D., professor. His research interests include pattern recognition and machine learning.
|
About authors: ZHOU Xingyu, Ph.D. candidate, lecturer. His research interests include computer vision and adversarial examples. HU Guyu, Ph.D., professor. His research interests include computer networks, communication network management and network intelligence technology. DUAN Yexin, Ph.D. candidate, lecturer. His research interests include adversarial examples and image recognition.
|
|
|
|
|
|