[1] BLEI D M, MCAULIFFE J D. Supervised Topic Models[C/OL]. [2017-12-20]. https://arxiv.org/pdf/1003.0783.pdf.
[2] ZHU J, AHMED A, XING E P. MedLDA: Maximum Margin Supervised Topic Models. Journal of Machine Learning Research, 2012, 13: 2237-2278.
[3] RODRIGUES F, LOURENCO M, RIBEIRO B, et al. Learning Supervised Topic Models for Classification and Regression from Crowds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2409-2422.
[4] WANG C, BLEI D, LI F F. Simultaneous Image Classification and Annotation // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2009: 1903-1910.
[5] HOU S, CHEN L, TAO D, et al. Multi-layer Multi-view Topic Model for Classifying Advertising Video. Pattern Recognition, 2017, 68: 66-81.
[6] CHEN C, ZARE A, TRINH H N, et al. Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation. IEEE Transactions on Image Processing, 2017, 26(12): 5590-5602.
[7] WANG J D, ZHAO Z, ZHOU J Z, et al. Recommending Flickr Groups with Social Topic Model. Information Retrieval, 2012, 15(3/4): 278-295.
[8] QIU Z C, SHEN H. User Clustering in a Dynamic Social Network Topic Model for Short Text Streams. Information Sciences, 2017, 414: 102-116.
[9] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet Allocation. Journal of Machine Learning Research, 2003, 3: 993-1022.
[10] BLEI D M, LAFFERTY J D. Correlated Topic Models[C/OL]. [2017-12-20]. http://people.ee.duke.edu/~lcarin/Blei2005CTM.pdf.
[11] LI W, MCCALLUM A. Pachinko Allocation: DAG-Structured Mixture Models of Topic Correlations // Proc of the 23rd International Conference on Machine Learning. New York, USA: ACM, 2006: 577-584.
[12] BLEI D M, LAFFERTY J D. Dynamic Topic Models // Proc of the 23rd International Conference on Machine Learning. New York, USA: ACM, 2006: 113-120.
[13] WANG C, BLEI D, HECKERMAN D. Continuous Time Dynamic Topic Models // Proc of the 24th Conference on Uncertainty in Artificial Intelligence. Helsinki, Finland: AUAI Press, 2008: 579-586.
[14] ZHAO W X, JIANG J, WENG J S, et al. Comparing Twitter and Traditional Media Using Topic Models // Proc of the European Conference on Information Retrieval. Berlin, Germany: Springer, 2011: 338-349.
[15] LIU G L, XU X F, ZHU Y, et al. An Improved Latent Dirichlet Allocation Model for Hot Topic Extraction // Proc of the 4th IEEE International Conference on Big Data and Cloud Computing. Washington, USA: IEEE, 2015: 470-476.
[16] CAO Z Q, LI S J, LIU Y, et al. A Novel Neural Topic Model and Its Supervised Extension // Proc of the 29th AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2015: 2210-2216.
[17] LACOSTE-JULIEN S, SHA F, JORDAN M I. DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification // KOLLER D, SCHUURMANS D, BENGIO Y, et al., eds. Advances in Neural Information Processing Systems 21. Cambridge, USA: The MIT Press, 2009: 897-904.
[18] TADDY M. Multinomial Inverse Regression for Text Analysis. Journal of the American Statistical Association, 2013, 108(503): 755-770.
[19] RABINOVICH M, BLEI D M. The Inverse Regression Topic Model // Proc of the 31st International Conference on Machine Learning. New York, USA: ACM, 2014: 199-207.
[20] ZHANG C, EK C H, GRATAL X, et al. Supervised Hierarchical Dirichlet Processes with Variational Inference // Proc of the IEEE International Conference on Computer Vision. Washington, USA: IEEE, 2013: 254-261.
[21] DAI A M, STORKEY A J. The Supervised Hierarchical Dirichlet Process. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(2): 243-255.
[22] REN Y, WANG Y N, ZHU J. Spectral Learning for Supervised Topic Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 726-739.
[23] KANDEMIR M, KEKEÇ T, YENITERZI R. Supervising Topic Models with Gaussian Processes. Pattern Recognition, 2018, 77: 226-236.
[24] SEBER G A F. Nonlinear Regression Models // SEBER G A F, ed. The Linear Model and Hypothesis. Berlin, Germany: Springer, 2015: 117-128.
[25] LECUN Y, BENGIO Y, HINTON G. Deep Learning. Nature, 2015, 521(7553): 436-444.
[26] FARABET C, COUPRIE C, NAJMAN L, et al. Learning Hierarchical Features for Scene Labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(8): 1915-1929.
[27] ER M J, ZHANG Y, WANG N, et al. Attention Pooling-Based Convolutional Neural Network for Sentence Modelling. Information Sciences, 2016, 373: 1339-1351.
[28] HE K M, ZHANG X Y, REN S Q, et al. Deep Residual Learning for Image Recognition // Proc of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2016: 770-778.
[29] BLEI D M, JORDAN M I. Modeling Annotated Data // Proc of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, USA: ACM, 2003: 127-134.
[30] JOACHIMS T. Text Categorization with Support Vector Machines: Learning with Many Relevant Features // Proc of the European Conference on Machine Learning. Berlin, Germany: Springer, 1998: 137-142.
[31] LI X M, OUYANG J H, ZHOU X T, et al. Supervised Labeled Latent Dirichlet Allocation for Document Categorization. Applied Intelligence, 2015, 42(3): 581-593.
[32] LAI S W, XU L H, LIU K, et al. Recurrent Convolutional Neural Networks for Text Classification // Proc of the 29th AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2015: 2267-2273.
[33] IYYER M, MANJUNATHA V, BOYD-GRABER J, et al. Deep Unordered Composition Rivals Syntactic Methods for Text Classification // Proc of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Stroudsburg, USA: ACL, 2015: 1681-1691.
[34] PANG B, LEE L. Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales // Proc of the 43rd Annual Meeting of the Association for Computational Linguistics. Stroudsburg, USA: ACL, 2005: 115-124.