Research Advances on Theory of Open-Environment Machine Learning
YUAN Xiaotong1, ZHANG Xuyao2,3, LIU Xi1, CHENG Zhen2,3, LIU Chenglin2,3
1. School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044; 2. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190; 3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049
Abstract In an open environment, machine learning faces various challenges, including varying category sets, non-identically distributed data and noise interference. These challenges can cause a significant decline in the performance of traditional machine learning systems built under the closed-world assumption. Therefore, open-environment machine learning has become a research focus in artificial intelligence. In this paper, the current status and recent important advances in the theoretical study of open-environment machine learning are discussed from the perspectives of generalization, optimization, robustness and performance measurement. For generalization theory, advances in the generalization analysis of open-set detection, transfer/meta learning and sparse learning approaches are introduced. For optimization theory, advances in the theoretical analysis of stochastic and sparse optimization, online and continual optimization, as well as distributed and federated optimization, are introduced. For robustness theory, advances in robust learning under adversarial samples, random noise and noisy labels are introduced. For performance measurement, a number of widely used performance criteria for open-environment machine learning are introduced. Finally, prospects for the theoretical research trends of open-environment machine learning are provided.
Received: 09 October 2023
Fund: Supported by National Key Research and Development Program of China (No. 2018AAA0100400); National Natural Science Foundation of China (Nos. U21B2049, 61936005, 62222609, 62076236)
Corresponding Author:
YUAN Xiaotong, Ph.D., professor. His research interests include machine learning, stochastic optimization and computer vision.
About authors: ZHANG Xuyao, Ph.D., professor. His research interests include pattern recognition, machine learning and deep learning. LIU Xi, master student. Her research interests include federated learning, transfer learning and pattern recognition. CHENG Zhen, Ph.D. candidate. His research interests include pattern recognition, machine learning and adversarial robustness. LIU Chenglin, Ph.D., professor. His research interests include pattern recognition, machine learning, and document analysis and recognition.