HE Bingqian, WEI Wei, SONG Yanbei, et al. Human Motion Recognition based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models[J]. Journal of Chengdu University of Information Technology, 2019, (04): 358-364. [doi:10.16836/j.cnki.jcuit.2019.04.006]
Human Motion Recognition based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models
- Title:
- Human Motion Recognition based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models
- Article No.:
- 2096-1618(2019)04-0358-07
- Keywords:
- action recognition; spatio-temporal interest points; Harris-Laplace; 3D-SIFT; MGGMMs; feature extraction
- CLC Number:
- TP391.41
- Document Code:
- A
- Abstract:
- Human action recognition has become an active research topic in computer vision in recent years and is widely used in human-computer interaction, virtual reality, and related fields. To address the excessive redundant points and the neglected correlation of image data in traditional human action recognition algorithms, this paper proposes a recognition method that combines spatio-temporal interest points with a multivariate generalized Gaussian mixture model (MGGMM) estimated by a fixed-point method. By filtering redundant feature points and exploiting the MGGMM, the method extracts effective feature points and makes full use of the correlation in the data. Feature points of the video sequence are detected with an improved Harris-Laplace algorithm and described with the 3D-SIFT descriptor, visual words are clustered with the bag-of-words (BOW) model, and the improved MGGMM is finally used for modeling and classification. Experiments on the public KTH dataset show that the proposed method can effectively recognize and classify human actions in video.
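The abstract describes a four-stage pipeline: interest-point detection, local description, BOW quantization, and mixture-model classification. The sketch below illustrates only that data flow, under clearly stated substitutions that fall short of the paper's method: plain per-frame Harris corners stand in for the improved spatio-temporal Harris-Laplace detector, OpenCV's 2-D SIFT stands in for 3D-SIFT, and scikit-learn's standard GaussianMixture stands in for the fixed-point-estimated MGGMM. All video paths and parameter values are placeholders, not values from the paper.

```python
# Minimal pipeline sketch (assumptions noted above): Harris-style corners per
# frame -> SIFT descriptors -> BOW histogram -> per-class Gaussian mixture.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture


def video_descriptors(path, max_corners=100):
    """Detect Harris-style corners in every frame and describe them with SIFT."""
    sift = cv2.SIFT_create()
    cap = cv2.VideoCapture(path)
    descriptors = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
        if pts is None:
            continue
        keypoints = [cv2.KeyPoint(float(x), float(y), 7.0)
                     for x, y in pts.reshape(-1, 2)]
        _, desc = sift.compute(gray, keypoints)
        if desc is not None:
            descriptors.append(desc)
    cap.release()
    return np.vstack(descriptors) if descriptors else np.empty((0, 128), np.float32)


def bow_histogram(descriptors, codebook):
    """Quantize descriptors against the visual codebook into a normalized histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


# Hypothetical training videos grouped by action class (KTH-style labels).
train_videos = {
    "walking": ["walking_01.avi", "walking_02.avi"],
    "boxing": ["boxing_01.avi", "boxing_02.avi"],
}

# Build the visual vocabulary (BOW codebook) from all training descriptors.
all_desc = np.vstack([video_descriptors(v)
                      for videos in train_videos.values() for v in videos])
codebook = KMeans(n_clusters=100, n_init=10).fit(all_desc)

# Fit one mixture model per class on that class's BOW histograms; a single
# Gaussian component keeps the sketch simple where the paper fits an MGGMM.
class_models = {}
for label, videos in train_videos.items():
    X = np.array([bow_histogram(video_descriptors(v), codebook) for v in videos])
    class_models[label] = GaussianMixture(n_components=1,
                                          covariance_type="diag").fit(X)


def classify(path):
    """Label a test video with the class whose model gives the highest likelihood."""
    h = bow_histogram(video_descriptors(path), codebook).reshape(1, -1)
    return max(class_models, key=lambda c: class_models[c].score(h))
```

In the paper's setting each stand-in would be replaced by the corresponding component (spatio-temporal detector, 3D-SIFT, fixed-point MGGMM estimation), but the overall flow of local descriptors into a visual-word histogram and then a per-class likelihood comparison matches the pipeline the abstract outlines.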
Similar References:
[1] CHEN Sheng-di, HE Bing-qian, CHEN Si-yu, et al. Human Action Recognition based on Spatio-Temporal Interest Point[J]. Journal of Chengdu University of Information Technology, 2018, (02): 143. [doi:10.16836/j.cnki.jcuit.2018.02.007]
Memo:
Received: 2018-12-19. Funding: Key Scientific Research Project of the Education Department of Sichuan Province (17ZA0064).