Spatio-Temporal IoU Constraints Based Adversarial Defense Method for Object Tracking

SHENG Jingjing1, ZHANG Dawei1,2, CAI Tingyi1,2, XIAO Xin2, ZHENG Zhonglong1,2, JIANG Yunliang1,2

1. School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004; 2. Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004

Abstract With the wide application of deep learning in visual tracking, adversarial attacks are among the key factors affecting model performance. However, research on defense methods against adversarial attacks is still at an early stage. Therefore, a spatio-temporal intersection over union (IoU) constraints based adversarial defense method for object tracking is proposed. In this method, Gaussian noise constraints are first added to the adversarial examples. Then, along the tangent direction of the noise contour, the tangential constraint with the same noise level and the highest spatio-temporal IoU score is selected. The normal constraint is utilized to update the defense target toward the original image, and the normal and tangential constraints are combined orthogonally and optimized. Finally, the combined vector with the highest spatio-temporal IoU score and the lowest noise level is selected as the best constraint, added to the adversarial example image and transferred to the next frame, thereby realizing temporal defense. Experiments on the VOT2018, OTB100, GOT-10k and LaSOT tracking datasets verify the effectiveness of the proposed method.
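The abstract compresses a multi-step search into a few sentences, so a sketch of one frame of the defense loop may help. The code below is a minimal illustration under assumed interfaces: `track` is a generic callable mapping an image to a predicted box, `lam` is an assumed weighting for the spatio-temporal score, and the normal step is approximated by radially shrinking the correction, since the original image is unobserved at defense time. It is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def st_iou(box, ref_box, prev_box, lam=0.7):
    # Spatio-temporal IoU score: spatial overlap with the current
    # reference box weighted against temporal overlap with the
    # previous-frame result (lam is an assumed weighting).
    return lam * iou(box, ref_box) + (1.0 - lam) * iou(box, prev_box)

def defend_frame(track, adv, ref_box, prev_box,
                 n_dirs=10, step=2.0, iters=5, seed=0):
    """One frame of the defense: search for a small additive correction
    that raises the spatio-temporal IoU score, and return the defended
    image so the correction can be transferred to the next frame."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(adv, dtype=np.float64)    # running correction
    score = lambda d: st_iou(track(np.clip(adv + d, 0.0, 255.0)),
                             ref_box, prev_box)
    best = score(delta)                             # score with no defense
    for _ in range(iters):
        # 1) Tangential search: Gaussian-perturb the correction, project
        #    every candidate back onto the same L2 noise level, and keep
        #    the candidate with the highest ST-IoU score.
        level = np.linalg.norm(delta) or step
        cands = []
        for _ in range(n_dirs):
            g = rng.normal(0.0, 1.0, adv.shape)
            t = delta + step * g / (np.linalg.norm(g) + 1e-9)
            cands.append(t * level / (np.linalg.norm(t) + 1e-9))
        tang = max(cands, key=score)
        # 2) Normal step: shrink the correction radially so its noise
        #    level drops (a stand-in for stepping toward the original
        #    image, which is unavailable at test time).
        comb = tang * max(0.0, 1.0 - step / (np.linalg.norm(tang) + 1e-9))
        # 3) Keep the orthogonal combination only if the score improves.
        s = score(comb)
        if s >= best:
            best, delta = s, comb
    return np.clip(adv + delta, 0.0, 255.0), best
```

With a stub such as `track = lambda img: np.array([10., 10., 50., 50.])` the loop runs end to end; in practice `track` would wrap the attacked tracker's prediction on the defended frame.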
Received: 21 February 2024
Fund: National Natural Science Foundation of China (No.62272419), Natural Science Foundation of Zhejiang Province (No.LQ23F020010, LZ22F020010), Jinhua Science and Technology Plan Project (No.2023-4-016).
Corresponding Author:
ZHENG Zhonglong, Ph.D., professor. His research interests include pattern recognition, machine learning and image processing.

About authors:
SHENG Jingjing, Master student. Her research interests include computer vision and adversarial attack and defense.
ZHANG Dawei, Ph.D., lecturer. His research interests include deep learning and computer vision.
CAI Tingyi, Ph.D. candidate. Her research interests include graph neural networks and graph representation learning.
XIAO Xin, Ph.D. candidate. Her research interests include artificial intelligence and intelligent education.
JIANG Yunliang, Ph.D., professor. His research interests include intelligent information processing and geographic information systems.