An Enhanced TranCo-Training Categorization Model with Transfer Learning
TANG Huan-Ling¹, YU Li-Ping¹, LU Ming-Yu²
1. Key Laboratory of Intelligent Information Processing in Universities of Shandong, Shandong Institute of Business and Technology, Yantai 264005; 2. Information Science and Technology College, Dalian Maritime University, Dalian 116026
|
|
Abstract  In semi-supervised learning, when the unlabeled data are drawn from a distribution different from that of the labeled data, the topic of the target domain becomes biased and the performance of the semi-supervised classifier degrades. In this paper, transfer learning is applied to improve semi-supervised learning. An enhanced categorization model, TranCo-training, is proposed, which combines transfer learning techniques with the co-training method. A key component of TranCo-training computes the transferability of each unlabeled instance according to its consistency with its labeled neighbors. At each iteration, unlabeled instances are transferred from the auxiliary dataset according to their transferability. Theoretical analysis indicates that the transferability of an unlabeled instance is inversely proportional to its training error, so selecting instances by transferability minimizes the training error and avoids negative transfer. The topic-bias problem of semi-supervised learning is thereby alleviated. Experimental results show that the TranCo-training algorithm outperforms the RdCo-training algorithm when only a few labeled data in the target domain and abundant unlabeled data in the auxiliary domain are available.
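
The exact transferability measure and the two-view co-training loop are given in the full paper; the sketch below is only an illustrative reading of the abstract, assuming a k-nearest-neighbor notion of "consistency with labeled neighbors" and scikit-learn/NumPy utilities. The names transferability, tranco_iteration, k_neighbors and n_transfer are hypothetical.

# Hypothetical sketch of one TranCo-training selection step (not the authors' code).
# Transferability of an auxiliary instance is approximated here as the fraction of its
# labeled nearest neighbors that agree with the current classifier's prediction for it.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import NearestNeighbors

def transferability(X_lab, y_lab, X_aux, y_pred, k_neighbors=5):
    # Find each auxiliary instance's k nearest neighbors among the labeled data.
    nn = NearestNeighbors(n_neighbors=k_neighbors).fit(X_lab)
    _, idx = nn.kneighbors(X_aux)
    neighbor_labels = y_lab[idx]                      # shape (n_aux, k_neighbors)
    # Consistency of the predicted label with the labeled neighborhood.
    return (neighbor_labels == y_pred[:, None]).mean(axis=1)

def tranco_iteration(X_lab, y_lab, X_aux, n_transfer=10):
    # Train the current classifier on the labeled (target-domain) data.
    clf = MultinomialNB().fit(X_lab, y_lab)
    y_pred = clf.predict(X_aux)
    # Score auxiliary instances and transfer the most consistent ones with their
    # pseudo-labels; low-scoring instances are kept out to avoid negative transfer.
    scores = transferability(X_lab, y_lab, X_aux, y_pred)
    top = np.argsort(scores)[::-1][:n_transfer]
    X_lab = np.vstack([X_lab, X_aux[top]])
    y_lab = np.concatenate([y_lab, y_pred[top]])
    X_aux = np.delete(X_aux, top, axis=0)
    return X_lab, y_lab, X_aux

In the full model this selection step would be repeated inside a co-training loop with two classifier views, each transferring instances for the other; the single-view sketch above shows only the transferability-based transfer of auxiliary instances.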
|
Received: 29 November 2012
|
|
|
|
|
|
|
|