Feature Selection Algorithm Based on Granulation-Fusion for Massive High-Dimensional Data
JI Suqin, SHI Hongbo, LÜ Yali, GUO Min
Faculty of Information Management, Shanxi University of Finance and Economics, Taiyuan 030006, China

Abstract From the perspective of granular computing, a feature selection algorithm based on granulation-fusion is proposed for massive high-dimensional data. The bag of little bootstraps (BLB) is applied to granulate the original massive dataset into small data subsets (granules), and features are then selected by constructing multiple least absolute shrinkage and selection operator (LASSO) models on each granule. Finally, the features selected on the individual granules are fused with different weights, and the feature selection result for the original dataset is obtained by ranking. Experimental results on both artificial and real datasets show that the proposed algorithm is feasible and effective for massive high-dimensional data.
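The pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, assuming scikit-learn's Lasso as the LASSO solver; the subsample size b = n^gamma follows the BLB convention, while the fusion weight chosen here (each granule model's weighted R-squared on its own data), the function name granulation_fusion_select, and all parameter defaults are illustrative assumptions rather than the paper's exact scheme.

import numpy as np
from sklearn.linear_model import Lasso

def granulation_fusion_select(X, y, n_granules=10, gamma=0.6,
                              alpha=0.1, top_k=20, seed=0):
    """Sketch of granulation-fusion feature selection (assumptions noted above)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b = int(n ** gamma)                 # BLB-style subsample size b = n^gamma
    scores = np.zeros(d)
    for _ in range(n_granules):
        # Granulation: draw a small subset of the massive dataset.
        idx = rng.choice(n, size=b, replace=False)
        # BLB resampling: multinomial weights over the b points summing to n,
        # so the weighted fit behaves like a bootstrap sample of the full data.
        w = rng.multinomial(n, np.full(b, 1.0 / b)).astype(float)
        # Fit a LASSO model on this granule.
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(X[idx], y[idx], sample_weight=w)
        # Fusion: weight each granule's selected features; the weighted
        # R-squared used here is a hypothetical choice of granule weight.
        r2 = max(model.score(X[idx], y[idx], sample_weight=w), 0.0)
        scores += r2 * (model.coef_ != 0)
    # Ranking: order features by their fused scores and keep the top k.
    return np.argsort(scores)[::-1][:top_k]

Because each Lasso model touches only b = n^gamma rows (with the BLB weights restoring an effective sample size of n), the per-granule cost stays small even when the original dataset is massive, and the granules can be processed independently or in parallel before fusion.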
Received: 08 February 2016
About the authors: JI Suqin (corresponding author), born in 1972, master, lecturer. Her research interests include data mining and distributed technology. SHI Hongbo, born in 1965, Ph.D., professor. Her research interests include machine learning and data mining. LÜ Yali, born in 1975, Ph.D., associate professor. Her research interests include artificial intelligence and data mining. E-mail: yali.lv2008@gmail.com. GUO Min, born in 1978, Ph.D. candidate, lecturer. Her research interests include applied statistics.