Class-Incremental Learning Method Based on Feature Space Augmented Replay and Bias Correction
SUN Xiaopeng1, YU Lu1, XU Changsheng2
1. School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300382; 2. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
Abstract Catastrophic forgetting arises when a network continually learns new knowledge. Various incremental learning methods have been proposed to address this problem, and one mainstream approach balances the plasticity and stability of incremental learning by storing a small amount of old data and replaying it. However, storing data from old tasks can cause memory limitations and privacy breaches. To address this issue, a class-incremental learning method based on feature space augmented replay and bias correction is proposed to alleviate catastrophic forgetting. First, the mean intermediate-layer feature of each class is stored as its representative prototype, and the low-level feature extraction network is frozen to prevent prototype drift. In the incremental learning stage, the stored prototypes are augmented and replayed through a geometric translation transformation to maintain the decision boundaries of previous tasks. Second, bias correction is introduced to learn classification weights for each task, further correcting the model's classification bias towards new tasks. Experiments on four benchmark datasets show that the proposed method outperforms state-of-the-art algorithms.
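The two components summarized above, translation-based replay of stored class prototypes and per-task bias correction of the classifier, can be illustrated with a minimal sketch. The function and class names, signatures, and the scale/shift form of the correction are assumptions made here for exposition only, not the authors' released implementation.

import numpy as np
import torch
import torch.nn as nn

def translate_replay(old_prototypes, new_feats):
    """Pseudo-replay of old classes in feature space (assumed sketch).

    For every stored old-class prototype, current-task features are
    geometrically translated so that their mean coincides with the
    prototype, yielding pseudo-features that stand in for the old class.
    old_prototypes: dict[class_id -> (d,) array holding the stored class mean]
    new_feats:      (n, d) array of intermediate-layer features of new-task data
    """
    new_mean = new_feats.mean(axis=0)
    return {cls: new_feats + (proto - new_mean)   # rigid translation in feature space
            for cls, proto in old_prototypes.items()}

class TaskBiasCorrection(nn.Module):
    """Per-task correction of classifier logits (assumed scale/shift form).

    One learnable scale and shift pair is kept per task and applied to the
    logits of that task's classes, counteracting the bias toward new tasks.
    """
    def __init__(self, num_tasks):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_tasks))
        self.shift = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, logits, task_of_class):
        # task_of_class: LongTensor of shape (num_classes,) mapping each
        # class index to the task in which that class was first learned
        return logits * self.scale[task_of_class] + self.shift[task_of_class]

In such a scheme, the pseudo-features returned by translate_replay would be fed to the unified classifier together with real new-task features, and TaskBiasCorrection would be applied on top of the resulting logits before the classification loss is computed.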
Received: 20 June 2024
Fund: National Natural Science Foundation of China (No. 62202331)
Corresponding Author:
YU Lu, Ph.D., associate professor. Her research interests include continual learning and representation learning.
About authors: SUN Xiaopeng, Master's student. His research interests include continual learning. XU Changsheng, Ph.D., professor. His research interests include multimedia analysis/indexing/retrieval, pattern recognition and computer vision.