|
|
Two-Stage Image Classification Method Based on Three-Way Decisions
CHEN Chaofan1,2, ZHANG Hongyun1,2, CAI Kecan1,2, MIAO Duoqian1,2
1. College of Electronics and Information Engineering, Tongji University, Shanghai 201804
2. The Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 201804
|
|
Abstract  A single model cannot effectively handle the uncertainty in prediction results. Therefore, shadowed set theory is introduced into image classification from the perspective of three-way decisions, and a two-stage image classification method is designed. Firstly, samples are classified by a convolutional neural network to obtain a membership matrix. Then, a sample partitioning algorithm based on shadowed sets processes the membership matrix to separate out the uncertain part of the classification results, the uncertain domain, whose samples are deferred for delayed decision making. Finally, feature fusion is applied and an SVM serves as the classifier for a secondary classification, reducing the uncertainty of the classification results and improving classification accuracy. Experiments on the CIFAR-10 and Caltech 101 datasets validate the effectiveness of the proposed method.
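The partitioning step in the abstract can be illustrated with a small sketch. Shadowed sets (Pedrycz) map fuzzy memberships into three regions via a threshold α: values at least 1−α are elevated toward 1 (accept), values at most α are reduced toward 0 (reject), and the in-between "shadow" is the uncertain region deferred for delayed decision. The sketch below is a minimal illustration of that general idea, not the paper's exact algorithm; the function names and the use of the row-wise maximum softmax membership are assumptions for the example.

```python
import numpy as np

def optimal_alpha(memberships, n_steps=100):
    """Search for alpha using Pedrycz's discrete balance criterion:
    |elevated mass + reduced mass - cardinality of the shadow| is minimized."""
    mu = np.asarray(memberships, dtype=float).ravel()
    best_alpha, best_cost = 0.0, np.inf
    for alpha in np.linspace(0.01, 0.49, n_steps):
        elevated = np.sum(1.0 - mu[mu >= 1.0 - alpha])      # mass lost by elevation
        reduced = np.sum(mu[mu <= alpha])                   # mass lost by reduction
        shadow = np.sum((mu > alpha) & (mu < 1.0 - alpha))  # count of shadow elements
        cost = abs(elevated + reduced - shadow)
        if cost < best_cost:
            best_alpha, best_cost = alpha, cost
    return best_alpha

def three_way_partition(softmax_probs, alpha):
    """Split samples by their maximum class membership: high-confidence samples
    keep the CNN label; shadow samples form the uncertain domain for stage two."""
    top = np.asarray(softmax_probs).max(axis=1)
    certain = np.where(top >= 1.0 - alpha)[0]   # accept region
    uncertain = np.where(top < 1.0 - alpha)[0]  # shadow: delayed decision, re-classified by SVM
    return certain, uncertain
```

In the two-stage pipeline, only the `uncertain` indices would be passed to the second-stage SVM classifier built on fused features, while `certain` samples retain the CNN's prediction.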
|
Received: 07 May 2021
|
|
Fund: National Natural Science Foundation of China (No. 62076182, 61976158, 61976160)
Corresponding Author:
ZHANG Hongyun, Ph.D., associate professor. Her research interests include principal curve algorithms, granular computing and fuzzy sets.

About the authors: CHEN Chaofan, master's student. His research interests include image classification, deep learning and granular computing. CAI Kecan, Ph.D. candidate. Her research interests include image classification and granular computing. MIAO Duoqian, Ph.D., professor. His research interests include artificial intelligence, machine learning, big data analysis and granular computing.
|
|
|
|
|
|