Group Sample Learning to Rank Approach Based on Likelihood Loss Function
LIN Yuan 1, XU Bo 2, SUN Xiaoling 1, LIN Hongfei 2, XU Kan 2
1. Faculty of Humanities and Social Sciences, Dalian University of Technology, Dalian 116024, China
2. School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
Abstract: Using group samples to train the ranking model provides a new way to construct learning to rank methods. In this paper, a likelihood loss function is constructed on group samples to train the ranking model. Combined with a preference-weighted loss function and optimization of the initial ranking list, a new group-based learning to rank method built on a neural network is proposed. Experimental results show that the proposed approach effectively improves ranking performance.
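As a rough illustration only (the paper's exact formulation is not reproduced here), the sketch below assumes a Plackett-Luce style likelihood loss over a single group sample scored by a small neural network. The preference weighting and initial-ranking-list optimization mentioned in the abstract are omitted, and the feature dimension, labels, network, and learning rate are all illustrative assumptions.

# A minimal sketch, NOT the authors' exact method: a Plackett-Luce style
# likelihood loss over one group sample, scored by a linear "network".
# All sizes, labels, and hyperparameters are made up for the example.
import torch

def group_likelihood_loss(scores, relevance):
    # Negative log-likelihood of ordering the group's documents by
    # decreasing relevance under a Plackett-Luce model of the scores.
    order = torch.argsort(relevance, descending=True)   # target permutation
    s = scores[order]
    # log sum_{j >= i} exp(s_j): flip, take cumulative logsumexp, flip back
    log_tail = torch.flip(torch.logcumsumexp(torch.flip(s, [0]), dim=0), [0])
    return -(s - log_tail).sum()

# Toy usage: one group of 4 documents with 5 features each (assumed sizes).
torch.manual_seed(0)
net = torch.nn.Linear(5, 1)                      # linear scoring model
features = torch.randn(4, 5)                     # one group sample
relevance = torch.tensor([2.0, 0.0, 1.0, 0.0])   # graded relevance labels

optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(50):
    optimizer.zero_grad()
    loss = group_likelihood_loss(net(features).squeeze(-1), relevance)
    loss.backward()
    optimizer.step()
print("final group loss:", float(loss))

Minimizing this loss pushes the scores of more relevant documents in the group above those of less relevant ones, which is the general idea behind likelihood-based group ranking losses.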
LIN Yuan, XU Bo, SUN Xiaoling, LIN Hongfei, XU Kan. Group Sample Learning to Rank Approach Based on Likelihood Loss Function[J]. Pattern Recognition and Artificial Intelligence, 2017, 30(3): 235-241.