Worst-Separation Spatially Smooth Discriminant Analysis with Constrained Average Compactness
NIU Lu-Lu¹, CHEN Song-Can¹·², YU Lu³
1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016
2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093
3. Institute of Communications Engineering, PLA University of Science and Technology, Nanjing 210007
Abstract: Spatially Smooth Linear Discriminant Analysis (SLDA) and IMage Euclidean Distance Discriminant Analysis (IMEDA), which incorporate the spatial structure information of images, are two main discriminant methods for dimensionality reduction, and both classify better than LDA. Different from SLDA and IMEDA, the proposed algorithms, WSLDA and WIMEDA, obtain their solutions by parameterizing the projection directions, keeping an upper bound on the average within-class scatter and maximizing the minimal between-class scatter. Moreover, their solution reduces to a well-known eigenvalue optimization problem, minimizing the maximal eigenvalue of a symmetric matrix, which avoids the full eigenvalue decomposition required by many existing algorithms. Experiments on the standard face datasets Yale, AR and FERET validate the effectiveness of WSLDA and WIMEDA.
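As an illustrative sketch of the formulation the abstract describes (the notation below is ours, not drawn from the paper): let $S_w$ denote the average within-class scatter matrix, $S_b^{ij}$ the between-class scatter matrix of class pair $(i, j)$, and $W$ the projection matrix. The worst-separation criterion with constrained average compactness can then be written as

$$\max_{W} \ \min_{i \neq j} \ \mathrm{tr}\left(W^{\top} S_b^{ij} W\right) \quad \text{s.t.} \quad \mathrm{tr}\left(W^{\top} S_w W\right) \le 1,$$

and, per the abstract, a suitable parameterization of the projection directions reduces this to the classical eigenvalue optimization problem

$$\min_{\theta} \ \lambda_{\max}\big(A(\theta)\big),$$

where $\lambda_{\max}(\cdot)$ denotes the maximal eigenvalue and $A(\theta)$ is a symmetric matrix depending on the parameters $\theta$; here $A(\theta)$ is an illustrative placeholder for the matrix actually constructed in the paper.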