[1] SHVAI N, CARMONA A L, NAKIB A. Adaptive Image Anonymization in the Context of Image Classification with Neural Networks // Proc of the IEEE/CVF International Conference on Computer Vision. Washington, USA: IEEE, 2023: 5051-5060.
[2] CHRISTENSEN A, MANCINI M, KOEPKE A S, et al. Image-Free Classifier Injection for Zero-Shot Classification // Proc of the IEEE/CVF International Conference on Computer Vision. Washington, USA: IEEE, 2023: 19026-19035.
[3] LING Y, YU J F, XIA R. Vision-Language Pre-training for Multimodal Aspect-Based Sentiment Analysis // Proc of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: ACL, 2022, I: 2149-2159.
[4] LIANG Y X, XIA Y T, KE S Y, et al. AirFormer: Predicting Nationwide Air Quality in China with Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(12): 14329-14337.
[5] GORISHNIY Y, RUBACHEV I, KHRULKOV V, et al. Revisiting Deep Learning Models for Tabular Data[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2106.11959.
[6] HUANG X, KHETAN A, CVITKOVIC M, et al. TabTransformer: Tabular Data Modeling Using Contextual Embeddings[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2012.06678.pdf.
[7] ARIK S Ö, PFISTER T. TabNet: Attentive Interpretable Tabular Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(8): 6679-6687.
[8] ALTAE-TRAN H, RAMSUNDAR B, PAPPU A S, et al. Low Data Drug Discovery with One-Shot Learning. ACS Central Science, 2017, 3(4): 283-293.
[9] JANAKIRAMAIAH B, KALYANI G, KARUNA A, et al. Military Object Detection in Defense Using Multi-level Capsule Networks. Soft Computing, 2023, 27(2): 1045-1059.
[10] WANG Z Q, LI M, OU H W, et al. A Few-Shot Malicious Encrypted Traffic Detection Approach Based on Model-Agnostic Meta-Learning. Security and Communication Networks, 2023. DOI: 10.1155/2023/3629831.
[11] SHORTEN C, KHOSHGOFTAAR T M. A Survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 2019, 6(1). DOI: 10.1186/s40537-019-0197-0.
[12] BAYER M, KAUFHOLD M A, REUTER C. A Survey on Data Augmentation for Text Classification. ACM Computing Surveys, 2023, 55(7). DOI: 10.1145/3544558.
[13] FANG J P, TANG C Z, CUI Q, et al. Semi-supervised Learning with Data Augmentation for Tabular Data // Proc of the 31st ACM International Conference on Information and Knowledge Management. New York, USA: ACM, 2022: 3928-3932.
[14] BATISTA G E A P A, PRATI R C, MONARD M C. A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data. ACM SIGKDD Explorations Newsletter, 2004, 6(1): 20-29.
[15] CHAPELLE O, WESTON J, BOTTOU L, et al. Vicinal Risk Minimization // Proc of the 13th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2000: 395-401.
[16] BORISOV V, LEEMANN T, SEẞLER K, et al. Deep Neural Networks and Tabular Data: A Survey. IEEE Transactions on Neural Networks and Learning Systems, 2024. DOI: 10.1109/TNNLS.2022.3229161.
[17] KINGMA D P, WELLING M. Auto-Encoding Variational Bayes[C/OL]. [2024-03-09]. https://arxiv.org/pdf/1312.6114.pdf.
[18] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative Adversarial Nets // Proc of the 27th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2014, II: 2672-2680.
[19] SONG Y, ERMON S. Generative Modeling by Estimating Gradients of the Data Distribution // Proc of the 33rd International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2019: 11918-11930.
[20] HO J, JAIN A, ABBEEL P. Denoising Diffusion Probabilistic Models // Proc of the 34th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2020: 6840-6851.
[21] SONG Y, SOHL-DICKSTEIN J, KINGMA D P, et al. Score-Based Generative Modeling through Stochastic Differential Equations[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2011.13456.pdf.
[22] SONG J M, MENG C L, ERMON S. Denoising Diffusion Implicit Models[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2010.02502.pdf.
[23] NICHOL A Q, DHARIWAL P. Improved Denoising Diffusion Probabilistic Models. Proceedings of Machine Learning Research, 2021, 139: 8162-8171.
[24] KOTELNIKOV A, BARANCHUK D, RUBACHEV I, et al. TabDDPM: Modelling Tabular Data with Diffusion Models. Proceedings of Machine Learning Research, 2023, 202: 17564-17579.
[25] KIM J, LEE C, PARK N. STaSy: Score-Based Tabular Data Synthesis[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2210.04018.pdf.
[26] LEE C, KIM J, PARK N. CoDi: Co-evolving Contrastive Diffusion Models for Mixed-Type Tabular Synthesis. Proceedings of Machine Learning Research, 2023, 202: 18940-18956.
[27] ZHANG H K, ZHANG J N, SRINIVASAN R, et al. Mixed-Type Tabular Data Synthesis with Score-Based Diffusion in Latent Space[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2310.09656.pdf.
[28] KANG G L, DONG X Y, ZHENG L, et al. PatchShuffle Regularization[C/OL]. [2024-03-09]. https://arxiv.org/pdf/1707.07103.pdf.
[29] YOU Z B, ZHONG Y, BAO F, et al. Diffusion Models and Semi-supervised Learners Benefit Mutually with Few Labels // Proc of the 37th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2023: 43479-43495.
[30] WAN Z H, WAN X J, WANG W G. Improving Grammatical Error Correction with Data Augmentation by Editing Latent Representation // Proc of the 28th International Conference on Computational Linguistics. Stroudsburg, USA: ACL, 2020: 2202-2212.
[31] LIU D Y H, GONG Y Y, FU J, et al. Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space // Proc of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: ACL, 2020: 5798-5810.
[32] GONG S S, LI M K, FENG J T, et al. DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2210.08933.pdf.
[33] REID M, HELLENDOORN V J, NEUBIG G. DiffusER: Discrete Diffusion via Edit-Based Reconstruction[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2210.16886.pdf.
[34] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: Synthetic Minority Over-Sampling Technique. Journal of Artificial Intelligence Research, 2002, 16(1): 321-357.
[35] XU L, SKOULARIDOU M, CUESTA-INFANTE A, et al. Modeling Tabular Data Using Conditional GAN // Proc of the 33rd International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2019: 7335-7345.
[36] OUYANG Y D, XIE L Y, LI C X, et al. MissDiff: Training Diffusion Models on Tabular Data with Missing Values[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2307.00467.pdf.
[37] SONG Y, DHARIWAL P, CHEN M, et al. Consistency Models. Proceedings of Machine Learning Research, 2023, 202: 32211-32252.
[38] PAN X R, YE T Z, XIA Z F, et al. Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention // Proc of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2023: 2082-2091.
[39] AUSTIN J, JOHNSON D D, HO J, et al. Structured Denoising Diffusion Models in Discrete State-Spaces // Proc of the 35th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2021: 17981-17993.
[40] DHARIWAL P, NICHOL A. Diffusion Models Beat GANs on Image Synthesis // Proc of the 35th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2021: 8780-8794.
[41] ZHANG Q S, TAO M L, CHEN Y X. gDDIM: Generalized Denoising Diffusion Implicit Models[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2206.05564.pdf.
[42] SHEYNIN S, ASHUAL O, POLYAK A, et al. KNN-Diffusion: Image Generation via Large-Scale Retrieval[C/OL]. [2024-03-09]. https://arxiv.org/pdf/2204.02849.pdf.
[43] HUANG L K, WEI Y. Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization // Proc of the 36th International Conference on Neural Information Processing Systems. Cambridge, USA: MIT Press, 2022: 3329-3342.