Pattern Recognition and Artificial Intelligence
 
 
2021 Vol. 34, Issue 10, published 2021-10-25

Adaptive Learning for Classification and Clustering
873 Class-Aware Based KNN Classification Method
BIAN Zekang, ZHANG Jin, WANG Shitong
Many conventional classification methods start from the hypothesis that the training samples and the testing samples follow the same, or at least similar, distributions. In many practical applications this hypothesis is hard to satisfy, and the classification performance of traditional methods such as support vector machines degrades accordingly. Therefore, a class-aware based KNN classification method (CA-KNN) is proposed. A sparse representation model is built on the assumption that any testing sample can be represented sparsely by the training samples, and the class label information is exploited by CA-KNN to improve the accuracy of the sparse representation. The nearest-neighbor idea of KNN is introduced to improve the generalization capability of CA-KNN, and it is proved in theory that the CA-KNN classifier is directly related to the Bayes decision rule for minimum error. Theoretical and experimental results show that CA-KNN delivers better classification performance. (A schematic sketch follows this entry.)
2021 Vol. 34 (10): 873-884
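A minimal Python sketch of the CA-KNN idea described above: the test sample is sparsely coded over the training samples, and the sparse weights of its k nearest neighbors are accumulated per class. The Lasso coder, the hyperparameters alpha and k, and the per-class weight-mass decision rule are illustrative assumptions, not the authors' exact formulation.

    import numpy as np
    from sklearn.linear_model import Lasso

    def ca_knn_predict(X_train, y_train, x_test, k=10, alpha=0.01):
        # Sparse representation: code x_test over the training samples
        # (columns of X_train.T), keeping only a few nonzero weights.
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(X_train.T, x_test)
        w = coder.coef_                              # one weight per training sample
        # KNN step: restrict the vote to the k nearest training samples.
        dists = np.linalg.norm(X_train - x_test, axis=1)
        neighbors = np.argsort(dists)[:k]
        # Class-aware vote: accumulate sparse weights per class label.
        scores = {}
        for i in neighbors:
            scores[y_train[i]] = scores.get(y_train[i], 0.0) + w[i]
        return max(scores, key=scores.get)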
885 Maximum Margin of Twin Sphere Model via Combined Smooth Reward-Penalty Loss Function with Lower Bound
KANG Qian, ZHOU Shuisheng
In extremely imbalanced classification, the classical spherical classifier counts the loss of correctly classified samples as zero, so the decision function is constructed from misclassified samples only. In this paper, a smooth reward-penalty loss function with a lower bound is proposed, in which the loss of a correctly classified sample is negative. The objective function is thus rewarded for correct classifications, and the interference of noise near the boundary is avoided. Based on the maximum margin of twin spheres support vector machine, a maximum margin of twin sphere model with the combined reward-penalty loss function with lower bound (RPMMTS) is established. RPMMTS constructs two concentric spheres using Newton's method: the majority-class samples are captured inside the small sphere while the extra space is eliminated, and by enlarging the margin between the two concentric spheres, as many minority-class samples as possible are pushed outside the large sphere. Experimental results show that the proposed loss function makes RPMMTS outperform other imbalanced classification algorithms. (A sketch of such a loss follows this entry.)
2021 Vol. 34 (10): 885-897
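A minimal sketch of a smooth reward-penalty loss with a lower bound, in the spirit of the abstract: correctly classified samples (positive margin) receive a negative loss, i.e. a bounded reward, while misclassified samples are penalized smoothly. The shifted-softplus form is an illustrative assumption, not the paper's exact loss.

    import numpy as np

    def reward_penalty_loss(margin):
        # log(1 + exp(-m)) is a smooth penalty, ~linear for m << 0 and -> 0
        # for m >> 0; shifting by log(2) makes the loss negative (a reward)
        # for margin > 0, with lower bound -log(2).
        return np.logaddexp(0.0, -margin) - np.log(2.0)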
898 Deep Transfer Active Learning Method Combining Source Domain Difference and Target Domain Uncertainty
LIU Dapeng, CAO Yongfeng, SU Caixia, ZHANG Lun
Training deep neural network models comes with a heavy labeling cost. To reduce this cost, a deep transfer active learning method combining source-domain difference and target-domain uncertainty is proposed. Starting from an initial model transferred from the source task, the current-task samples that contribute most to model improvement are selected for labeling using a dynamically weighted combination of source-domain difference and target-domain uncertainty. An information extraction ratio (IER) is concretely defined for the specific case, and an IER-based batch training strategy and a T&N batch training strategy are proposed for the model training process. The method is tested in two cross-dataset transfer learning experiments. The results show that it achieves good performance and reduces the annotation cost effectively, and that the proposed strategies optimize the distribution of computing resources during active learning: the model learns more from samples in the early phases and less in the later phases. (The selection rule is sketched below.)
2021 Vol. 34 (10): 898-908
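A minimal sketch of the sample-selection rule described above: unlabeled target samples are ranked by a dynamically weighted combination of source-domain difference and target-domain uncertainty. The entropy-based uncertainty, the cosine-distance difference term, and the weight lam are illustrative assumptions; the paper's IER-based and T&N batch strategies are not reproduced here.

    import numpy as np

    def select_batch(probs, feats, source_centroid, lam, batch_size=32):
        # probs: (n, n_classes) predicted probabilities on the unlabeled pool
        # feats: (n, d) feature embeddings of the same pool
        uncertainty = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # entropy
        cos = feats @ source_centroid / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(source_centroid) + 1e-12)
        difference = 1.0 - cos                      # far from the source domain
        score = lam * difference + (1.0 - lam) * uncertainty
        return np.argsort(-score)[:batch_size]      # label the top-scoring samples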
909 Survey of Metric-Based Few-Shot Classification
LIU Xin, ZHOU Kairui, HE Yulin, JING Liping, YU Jian
Few-shot learning aims to make machines recognize and generalize, as humans do, from a small number of samples. Metric-based few-shot learning methods learn a low-dimensional embedding space in which query samples are classified by their distances to class embeddings. In this paper, the relevant literature is organized around two key issues, class representation learning and similarity learning, and metric-based few-shot learning methods alone are classified in a detailed and comprehensive way from the perspective of these issues. Finally, the experimental results of representative studies on commonly used image classification datasets are summarized, the problems of existing methods are analyzed, and future research directions are discussed. (The basic metric-based recipe is sketched below.)
2021 Vol. 34 (10): 909-923
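A minimal sketch of the basic metric-based recipe the survey covers: one embedding per class is formed from the support samples, and each query is labeled by its nearest class embedding. Mean prototypes and squared Euclidean distance are common conventions assumed here for illustration.

    import numpy as np

    def metric_few_shot(support, support_y, query, n_classes):
        # support: (n_support, d) embedded support samples; query: (n_query, d)
        prototypes = np.stack([support[support_y == c].mean(axis=0)
                               for c in range(n_classes)])
        # Squared Euclidean distance from every query to every prototype.
        d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)                    # nearest class embedding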
924 Dynamic Parameter Setting Method for Domain Adaptation
ZHANG Yuhong, YU Daoyuan, HU Xuegang
The performance of domain adaptation methods is unstable across tasks because static weights are assigned to the multiple measures used during feature alignment. Therefore, a dynamic parameter setting method for domain adaptation is proposed. A reproducing kernel Hilbert space is introduced, and a domain-invariant space is learned by minimizing the distance between the two domains under a discriminative joint probability distribution. In this process, the A-distance is employed to measure the ratio of the discrepancy on same-label pairs to that on different-label pairs, and this ratio dynamically adjusts the trade-off between transferability and discriminability. With these dynamic parameter settings, better performance is obtained. Experimental results on three image classification datasets show the effectiveness of the proposed method. (The RKHS discrepancy is sketched below.)
2021 Vol. 34 (10): 924-931
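A minimal sketch of measuring cross-domain discrepancy in a reproducing kernel Hilbert space, as the abstract describes: a standard (biased) maximum mean discrepancy estimate with an RBF kernel. The kernel and bandwidth are illustrative assumptions; the paper's joint-probability weighting and A-distance-driven dynamic parameters are not reproduced here.

    import numpy as np

    def rbf(a, b, gamma=1.0):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def mmd2(Xs, Xt, gamma=1.0):
        # squared MMD = E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)]
        return (rbf(Xs, Xs, gamma).mean() + rbf(Xt, Xt, gamma).mean()
                - 2.0 * rbf(Xs, Xt, gamma).mean())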
932 Discriminative Joint Matching for Unsupervised Domain Adaptation
ZHANG Yong, XIA Tianqi, HUANG Dan
The transfer effect of domain adaptation suffers when the differences between domains are large; reducing the domain difference helps, but the discriminability needed for subsequent classification is often ignored. A discriminative joint matching algorithm is proposed to handle this problem: categories across the domains are treated differently, and feature matching is combined with instance reweighting to improve the transfer effect. The joint probability distribution is employed to measure the difference in data distribution between domains. Transferability is enhanced by reducing the distance between same-class samples across domains, while discriminability is improved by enlarging the distance between different classes. Feature matching and instance reweighting are combined during feature dimensionality reduction to jointly construct a feature transformation matrix. Experimental results show that the proposed algorithm yields better classification results on 18 tasks. (The two distance terms are sketched below.)
2021 Vol. 34 (10): 932-940
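A minimal sketch of the two distance terms the abstract contrasts: the distance between same-class statistics across domains is shrunk (transferability) while the distance between different-class statistics is grown (discriminability). Class means as the statistic, and pseudo-labels for the target domain, are illustrative simplifications of the joint-distribution matching.

    import numpy as np

    def joint_matching_objective(Xs, ys, Xt, yt_pseudo, n_classes):
        same, diff = 0.0, 0.0
        for c in range(n_classes):
            mu_s = Xs[ys == c].mean(axis=0)
            for k in range(n_classes):
                mu_t = Xt[yt_pseudo == k].mean(axis=0)
                d = np.sum((mu_s - mu_t) ** 2)
                if k == c:
                    same += d                       # same class: pull together
                else:
                    diff += d                       # different classes: push apart
        return same - diff                          # lower is better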
941 Fast Few-Shot Learning Algorithm Based on Deep Network
DAI Leichao, FENG Lin, SHANG Xinglin, SU Han, GONG Xun
Few-shot learning, which mimics the human ability to learn from a small number of samples, is one of the hotspots of machine learning. To address the heavy task volume and serious overfitting in the iterative training of current few-shot methods, a fast few-shot learning algorithm based on a deep network is proposed. Firstly, kernel density estimation and image filtering are used to add multiple types of random noise to the training set, generating support sets and query sets. Then a prototype network extracts image features of the support and query sets; according to the Bregman divergence, the center point of each class's support samples is employed as the class prototype, and the L2 norm measures the distance between the support set and a query image. Multiple heterogeneous base classifiers are generated using cross-entropy feedback loss. Finally, a voting mechanism fuses the nonlinear classification results of the base classifiers. Experiments show that the proposed algorithm speeds up the convergence of few-shot learning with higher classification accuracy and strong robustness. (The fusion step is sketched below.)
2021 Vol. 34 (10): 941-956
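A minimal sketch of the final fusion step above: several base classifiers, each trained on a differently-noised copy of the data, vote on every query sample. Majority voting over integer class labels is an illustrative stand-in for the paper's fusion of nonlinear classification results.

    import numpy as np

    def ensemble_predict(base_predict_fns, query):
        # Each base classifier maps queries to integer class labels.
        votes = np.stack([f(query) for f in base_predict_fns])  # (n_models, n_query)
        return np.array([np.bincount(votes[:, j]).argmax()      # majority vote
                         for j in range(votes.shape[1])])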
957 Adaptive Rulkov Neuron Clustering Algorithm
LIAO Yunrong, REN Haipeng
To cluster sample datasets with small inter-class distances and poor separability, an adaptive Rulkov neuron clustering algorithm is proposed. Firstly, a similarity matrix based on adaptive distance and shared nearest neighbors is constructed. Secondly, guided by this similarity matrix, the optimal cut of the undirected graph over the samples is replaced by a Laplacian spectral decomposition, and the eigenvectors associated with the larger eigenvalues are selected as new sample features, increasing the inter-class distance while reducing the intra-class distance. The samples are then mapped to Rulkov neurons whose mutual coupling strengths are determined by the sample distances, and the separability of different clusters is improved through self-learning of the coupling strengths. Finally, the strongly coupled subsets of the neural network are taken as the clustering result. Comparative experiments on synthetic and real datasets show that the proposed algorithm achieves better clustering performance. (The spectral step is sketched below.)
2021 Vol. 34 (10): 957-968
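A minimal sketch of the spectral step described above: a shared-nearest-neighbor similarity matrix is built, normalized, and its leading eigenvectors are taken as new sample features. The SNN similarity and the sklearn neighbor graph are illustrative choices; the Rulkov-neuron coupling dynamics that produce the final clusters are not reproduced here.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def spectral_features(X, k=10, n_features=3):
        knn = kneighbors_graph(X, k, mode='connectivity').toarray()
        snn = knn @ knn.T                           # shared-neighbor counts
        W = snn * ((knn + knn.T) > 0)               # similarity only between neighbors
        d = W.sum(axis=1) + 1e-12
        M = W / np.sqrt(d[:, None] * d[None, :])    # normalized: D^-1/2 W D^-1/2
        vals, vecs = np.linalg.eigh(M)
        return vecs[:, -n_features:]                # eigenvectors with largest eigenvalues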
 

Supervised by
China Association for Science and Technology
Sponsored by
Chinese Association of Automation
National Research Center for Intelligent Computing System
Institute of Intelligent Machines, Chinese Academy of Sciences
Published by
Science Press
 
Copyright © 2010 Editorial Office of Pattern Recognition and Artificial Intelligence
Address: No. 350 Shushanhu Road, Hefei, Anhui Province, P.R. China  Tel: 0551-65591176  Fax: 0551-65591176  Email: bjb@iim.ac.cn