[1] HE Bingqian,WEI Wei,SONG Yanbei,et al.Human Motion Recognition based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models[J].Journal of Chengdu University of Information Technology,2019,(04):358-364.[doi:10.16836/j.cnki.jcuit.2019.04.006]

Human Motion Recognition based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models


[1] Aggarwal J K,Ryoo M S.Human Activity Analysis:A Review[J].ACM Computing Surveys,2011,43(3):1-43.
[2] Hu Q,Qin L,Huang Q M.A survey of vision-based human action recognition[J].Chinese Journal of Computers,2013,(12):2512-2524.
[3] Piyathilaka L,Kodagoda S.Gaussian Mixture Based HMM for Human Daily Activity Recognition Using 3D Skeleton Features[C].2013 IEEE 8th Conference on Industrial Electronics and Applications(ICIEA),Melbourne,VIC,Australia,2013:567-572.
[4] Baxter R H,Robertson N M,Lane D M.Human behavior recognition in data-scarce domains[J].Pattern Recognition,2015,48(8):2377-2393.
[5] Zhou Z,Shi F,Wu W.Learning Spatial and Temporal Extents of Human Actions for Action Detection[J].IEEE Transactions on Multimedia,2015,17(4):512-525.
[6] Chaquet J M,Carmona E J,Fernández-Caballero A.A survey of video datasets for human action and activity recognition[J].Computer Vision and Image Understanding,2013,117(6):633-659.
[7] Bregonzio M,Gong S,Xiang T.Recognising action as clouds of space-time interest points[C].2009 IEEE Conference on Computer Vision and Pattern Recognition,2009:1948-1955.
[8] Selmi M,El Yacoubi M A,Dorizzi B.On the sensitivity of spatio-temporal interest points to person identity[C].2012 IEEE Southwest Symposium on Image Analysis and Interpretation,2012:69-72.
[9] Bellamine I,Tairi H.Motion detection and tracking using space-time interest points[C].2013 ACS International Conference on Computer Systems and Applications(AICCSA),2013:1-7.
[10] Hendaoui R,Abdellaoui M,Douik A.Synthesis of spatio-temporal interest point detectors:Harris 3D,MoSIFT and SURF-MHI[C].2014 1st International Conference on Advanced Technologies for Signal and Image Processing(ATSIP),2014:89-94.
[11] Dollár P,Rabaud V,Cottrell G,et al.Behavior recognition via sparse spatio-temporal features[C].2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance,2005:65-72.
[12] Wang J,Chen Z,Wu Y.Action recognition with multiscale spatio-temporal contexts[C].2011 IEEE Conference on Computer Vision and Pattern Recognition(CVPR),2011:3185-3192.
[13] Ding S T,Qu S R.Human behavior recognition algorithm based on improved spatio-temporal interest point detection[J].Journal of Northwestern Polytechnical University,2016,34(5):886-892.
[14] Turaga P,Chellappa R,Subrahmanian V S,et al.Machine Recognition of Human Activities:A Survey[J].IEEE Transactions on Circuits and Systems for Video Technology,2008,18(11):1473-1488.
[15] Shaily S,Mangat V.The Hidden Markov Model and its application to Human Activity Recognition[C].2015 2nd International Conference on Recent Advances in Engineering & Computational Sciences(RAECS),2015:1-4.
[16] Piyathilaka L,Kodagoda S.Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features[C]. 2013 IEEE 8th Conference on Industrial Electronics and Applications(ICIEA),2013:567-572.
[17] Niebles J C,Wang H,Fei-Fei L.Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words[J].International Journal of Computer Vision,2008,79(3):299-318.
[18] Bruno B,Mastrogiovanni F,Sgorbissa A,et al.Human motion modelling and recognition:a computational approach[C].2012 IEEE International Conference on Automation Science and Engineering(CASE),2012:156-161.
[19] Ali N M.Object classification and recognition using Bag-of-Words(BoW) model[C].2016 IEEE 12th International Colloquium on Signal Processing & Its Applications(CSPA),2016:216-220.
[20] Ghildiyal B,Singh A,Bhadauria H S.Image-based monument classification using bag-of-word architecture[C].2017 3rd International Conference on Advances in Computing Communication & Automation(ICACCA)(Fall),2017:1-5.
[21] Piyathilaka L,Kodagoda S.Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features[C].2013 IEEE 8th Conference on Industrial Electronics and Applications(ICIEA),2013:567-572.


 CHEN Sheng-di,HE Bing-qian,CHEN Si-yu,et al.Human Action Recognition based on Spatio-Temporal Interest Point[J].Journal of Chengdu University of Information Technology,2018,(04):143.[doi:10.16836/j.cnki.jcuit.2018.02.007]


Received: 2018-12-19. Funding: Key Research Project of the Education Department of Sichuan Province (17ZA0064).

Last Update: 2019-10-20