Generalized LVQ Algorithm Considering Feature Data Range
HU Yao-Min1, 2, LIU Wei-Ming1
1.School of Civil Engineering and Transportation, South China University of Technology, Guangzhou 510641
2.School of Information Engineering, Guangzhou Panyu Polytechnic, Guangzhou 511483
When Euclidean distance is used as a vector similarity metric, differences in feature data ranges are ignored, which degrades the classification accuracy of the traditional learning vector quantization (LVQ) algorithm and its variants. To solve this problem, a vector similarity metric that takes feature data ranges into account is proposed, and based on this metric and generalized LVQ (GLVQ), an algorithm called GLVQ-Range is put forward. The classification accuracy and computation speed of the proposed algorithm are tested on 8 datasets from the UCI machine learning repository and compared with those of the traditional and alternative LVQ algorithms. The practicability of the proposed algorithm in a real production environment is verified on a video-based vehicle classification dataset.
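To illustrate the idea behind a range-aware similarity metric combined with GLVQ, the sketch below scales each feature difference by that feature's data range before computing a squared distance, and plugs this distance into a toy GLVQ-style nearest-prototype learner. This is only a minimal illustration under stated assumptions, not the paper's GLVQ-Range formulation; the names range_distance, GLVQRangeSketch, learning_rate and n_epochs, the one-prototype-per-class initialization, and the exact form of range scaling are all assumptions made for the example.

```python
# Illustrative sketch only (assumed formulation, not the paper's exact metric):
# feature differences are divided by per-feature data ranges so that features
# with wide ranges do not dominate the distance, then used inside a GLVQ-style
# prototype learner with the relative-distance cost mu = (d+ - d-)/(d+ + d-).
import numpy as np

def range_distance(x, w, feature_range):
    """Squared distance with each feature difference scaled by its data range."""
    diff = (x - w) / feature_range
    return float(np.dot(diff, diff))

class GLVQRangeSketch:
    """Toy GLVQ trained with the range-scaled distance above (assumed setup)."""

    def __init__(self, learning_rate=0.05, n_epochs=30):
        self.lr = learning_rate
        self.n_epochs = n_epochs

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        # Per-feature ranges estimated from the training data (guard against zero).
        self.range_ = np.maximum(X.max(axis=0) - X.min(axis=0), 1e-12)
        # One prototype per class, initialized at the class mean (an assumption).
        self.w_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.c_ = self.classes_.copy()
        for _ in range(self.n_epochs):
            for x, t in zip(X, y):
                d = np.array([range_distance(x, w, self.range_) for w in self.w_])
                same = np.where(self.c_ == t)[0]
                diff = np.where(self.c_ != t)[0]
                j = same[np.argmin(d[same])]   # closest prototype of the correct class
                k = diff[np.argmin(d[diff])]   # closest prototype of a wrong class
                dj, dk = d[j], d[k]
                denom = (dj + dk) ** 2 + 1e-12
                # Gradient steps of the GLVQ cost: pull w_j toward x, push w_k away,
                # with the range scaling carried through the distance gradient.
                self.w_[j] += self.lr * (dk / denom) * (x - self.w_[j]) / self.range_ ** 2
                self.w_[k] -= self.lr * (dj / denom) * (x - self.w_[k]) / self.range_ ** 2
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        d = [[range_distance(x, w, self.range_) for w in self.w_] for x in X]
        return self.c_[np.argmin(d, axis=1)]
```

Because the range scaling is folded into the distance itself, the prototype update rule keeps the usual GLVQ form; only the distance (and its gradient) changes.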