XIE Zhaomin,CHEN Yongming,GUO Jun.Emotion Recognition from Electroencephalogram(EEG)Signals Using a Hybrid Convolutional and Long Short-term Memory Neural Network Model[J].Journal of Chengdu University of Information Technology,2026,41(02):147-153.[doi:10.16836/j.cnki.jcuit.2026.02.002]
EEG Emotion Recognition Based on a Hybrid CNN+LSTM Neural Model
- Title:
- Emotion Recognition from Electroencephalogram(EEG)Signals Using a Hybrid Convolutional and Long Short-term Memory Neural Network Model
- Article ID:
- 2096-1618(2026)02-0147-07
- Keywords:
- emotion recognition; CNN; LSTM; multi-channel fusion
- CLC number:
- TP391.1
- Document code:
- A
- Abstract (Chinese):
- With the rising number of adolescents suffering from depression, emotion recognition has drawn wide attention, and accurate emotion classification has broad applications in the medical field. Fast, accurate emotion recognition is therefore of great significance to research. Existing methods are either insufficiently accurate, place demands on the data, or compute slowly and waste time. To address this, a multi-channel fusion neural network classification method combining CNN and LSTM models is proposed. During preprocessing, the signal is sliced, frequency-domain features are extracted, and 14 EEG channels highly correlated with the brain's emotion-related regions are selected; the data are fed into the proposed model to obtain scores, and a binary classification experiment is conducted. This not only makes better use of local signal information but also analyzes spatial and temporal information jointly, improving classification accuracy. Compared with random forest results, combining the convolutional neural network with the LSTM further improves classification accuracy and captures how the data's features change along the time dimension, giving markedly better results than the machine-learning baseline.
- Abstract:
- With the growing number of adolescents suffering from depression, emotion recognition has attracted widespread attention, and accurate emotion classification has broad applications in the medical field. Fast and accurate emotion recognition is therefore of great significance to the research community. Existing emotion recognition methods are either insufficiently accurate, place strict requirements on the data, or compute slowly and incur high time costs. To address this, a multi-channel fusion neural network classification method that combines CNN and LSTM models is proposed. During preprocessing, the signal is sliced, frequency-domain features are extracted, and 14 EEG channels highly correlated with the brain's emotion-related regions are selected. The data are fed into the proposed model to obtain scores, and a binary classification experiment is conducted. This not only makes better use of local signal information but also analyzes spatial and temporal information jointly, improving classification accuracy. The experiments fully verify the effectiveness of combining spatial and temporal features for efficient analysis. Compared with the experimental results of a random forest model, the proposed model significantly improves classification accuracy, opening a new research path for affective computing with EEG signals.
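The preprocessing pipeline described in the abstract (slicing the raw multi-channel signal into windows, then extracting frequency-domain features from 14 selected channels) could be sketched roughly as follows. This is a minimal illustrative sketch: the sampling rate, window length, and frequency-band edges are assumptions, not values taken from the paper.

```python
import numpy as np

# Assumed sampling rate (Hz) and frequency bands; the paper's exact
# settings may differ.
FS = 128
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def slice_signal(eeg, win_sec=2.0, fs=FS):
    """Split a (channels, samples) EEG array into non-overlapping windows.

    Returns an array of shape (n_slices, channels, window_samples).
    """
    win = int(win_sec * fs)
    n = eeg.shape[1] // win
    return eeg[:, : n * win].reshape(eeg.shape[0], n, win).transpose(1, 0, 2)

def band_powers(windows, fs=FS):
    """Mean spectral power per band -> (n_slices, channels, n_bands)."""
    freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.stack(feats, axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((14, 10 * FS))  # 14 channels, 10 s of data
    windows = slice_signal(eeg)               # (5, 14, 256)
    feats = band_powers(windows)              # (5, 14, 4)
    print(windows.shape, feats.shape)
```

The resulting feature tensor preserves both the channel (spatial) and slice (temporal) axes, which is what lets a CNN operate over channels within each slice while an LSTM models the slice sequence.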
Memo
Received: 2024-08-20
Funding: Sichuan Provincial Department of Science and Technology project (2022JDR0043); Sichuan Provincial Key Laboratory of Numerical Simulation project (KLNS-2023SZFZ002)
Corresponding author: GUO Jun. E-mail: junguo0407@cuit.edu.cn
