Survey of Metric-Based Few-Shot Classification
LIU Xin1,2, ZHOU Kairui1,2, HE Yulin3, JING Liping1,2, YU Jian1,2
1. Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing 100044
2. School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044
3. Beijing Newlink Technology Co., Ltd., Beijing 100083
Abstract  Few-shot learning aims to enable machines to recognize and generalize concepts from only a small number of samples, as humans do. Metric-based few-shot learning methods learn a low-dimensional embedding space in which query samples are classified according to their distances to the class embeddings. In this paper, two key issues of these methods, class representation learning and similarity learning, are identified and used to organize the relevant literature. The survey focuses exclusively on metric-based few-shot learning methods, which are categorized in a detailed and comprehensive way from the perspective of these key issues. Finally, the experimental results of representative methods on commonly used image classification datasets are summarized, the problems of existing methods are analyzed, and future research directions are discussed.
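To make the two key issues concrete, the following minimal PyTorch sketch (illustrative only, not code from the paper) shows a common prototypical-network-style instantiation: class representations are the mean embeddings of each class's support samples, and similarity is the negative squared Euclidean distance between a query embedding and each class representation. The embedding dimension, episode sizes, and choice of distance are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_emb, support_labels, n_way):
    # Class representation learning (simplest form): mean embedding per class.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_way)])

def classify_queries(query_emb, prototypes):
    # Similarity learning (simplest form): negative squared Euclidean distance,
    # converted to class probabilities with a softmax over classes.
    dists = torch.cdist(query_emb, prototypes) ** 2   # shape (n_query, n_way)
    return F.softmax(-dists, dim=-1)

# Toy 5-way 1-shot episode with 15 query samples and a 64-dimensional embedding;
# the backbone network that would produce these embeddings is omitted and
# replaced by random features for the sake of a self-contained example.
support_emb = torch.randn(5, 64)
support_labels = torch.arange(5)
query_emb = torch.randn(15, 64)

probs = classify_queries(query_emb,
                         class_prototypes(support_emb, support_labels, n_way=5))
predictions = probs.argmax(dim=-1)                    # predicted class per query
```

Many of the surveyed methods can be read as refinements of these two functions, for example replacing the mean with a learned aggregation or replacing the Euclidean distance with a learned relation module.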
Received: 28 April 2021
Fund: National Natural Science Foundation of China (No. 61632004), Fundamental Research Funds for Graduate Student Innovation Project (No. 2021YJS031), Beijing Natural Science Foundation (No. Z180006), Fundamental Research Funds for the Central Universities (No. 2019JBZ110)
Corresponding Author:
JING Liping, Ph.D., professor. Her research interests include machine learning, high-dimensional representation learning and its applications in artificial intelligence.
About authors: LIU Xin, Ph.D. candidate. Her research interests include machine learning, metric learning and few-shot learning. ZHOU Kairui, master's student. His research interests include machine learning and few-shot learning. HE Yulin, Ph.D. His research interests include enterprise architecture design, high-performance distributed trading systems, and offline and real-time big data analysis. YU Jian, Ph.D., professor. His research interests include artificial intelligence and machine learning.