Abstract: A spatially smooth and complete subspace learning algorithm is proposed for feature extraction and recognition. Built on principal component analysis, spatially smooth subspace learning, and locality sensitive discriminant analysis, the proposed algorithm preserves both the global and the local geometric structure of the data, together with its discriminative and spatial-correlation information. Global geometric features and local spatial-correlation information are first extracted from the original samples, which are then linearly transformed into a new representation; the most discriminative features are subsequently selected from this representation for classification. Compared with general subspace learning algorithms, the proposed algorithm achieves a higher recognition rate. Experimental results demonstrate its effectiveness.
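The proposed method follows the general subspace-learning pipeline: learn a linear projection from the training samples, map all samples into the low-dimensional subspace, and classify in that subspace. The abstract does not give the exact projection, so the sketch below uses plain PCA as a stand-in for the learned transform and 1-NN as the classifier; the function names, the toy data, and the choice of PCA are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def pca_project(X, n_components):
    # Stand-in for the learned linear transform: center the data and
    # take the top principal directions (rows of Vt from the SVD).
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T          # d x k projection matrix
    return mean, W

def nearest_neighbor_classify(train_Y, train_labels, query_y):
    # 1-NN classification in the learned subspace
    dists = np.linalg.norm(train_Y - query_y, axis=1)
    return train_labels[np.argmin(dists)]

# Toy data: 6 samples, 4 features, two well-separated classes
X = np.array([[1.0, 2.0, 0.1, 0.0],
              [1.1, 1.9, 0.0, 0.1],
              [0.9, 2.1, 0.1, 0.1],
              [5.0, 0.5, 3.0, 2.0],
              [5.1, 0.4, 3.1, 2.1],
              [4.9, 0.6, 2.9, 1.9]])
labels = np.array([0, 0, 0, 1, 1, 1])

mean, W = pca_project(X, n_components=2)
Y = (X - mean) @ W                        # project training samples
query = np.array([5.0, 0.5, 3.0, 2.0])    # a sample near class 1
pred = nearest_neighbor_classify(Y, labels, (query - mean) @ W)
```

In the actual algorithm, the PCA projection `W` would be replaced by the spatially smooth, discriminative projection learned by the proposed method; the surrounding projection-then-classify structure stays the same.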