Unimodal Emotion Classification and Recognition Based on SEED-VII
Deep-Learning-Based Spatiotemporal Feature Modeling
These papers all focus on jointly capturing the temporal dynamics and spatial relationships of EEG signals through hybrid deep neural network architectures (e.g., CNNs, LSTMs, Transformers, and their variants) to improve emotion classification accuracy.
- CLANet: A Hybrid Encoder for EEG Emotion Recognition via Spatiotemporal-Spectral Feature Fusion(Huiwen Chen, Hongyu Liu, Naishi Feng, 2026, Advances in Computer and Materials Science Research)
- EEG Emotion Recognition Method Based on 3D Feature Map and Improved DenseNet(Jing-Ran Su, Qiu-Sheng Li, Qian-Li Zhang, Jun-Yong Hu, 2023, 電腦學刊)
- MSBiLSTM-Attention: EEG Emotion Recognition Model Based on Spatiotemporal Feature Fusion(Yahong Ma, Zhentao Huang, Yuyao Yang, Zuowen Chen, Qi Dong, Shanwen Zhang, Yuan Li, 2025, Biomimetics)
- SFE-Net: EEG-based Emotion Recognition with Symmetrical Spatial Feature Extraction(Xiangwen Deng, Ju-xia Zhu, Shangming Yang, 2021, Proceedings of the 29th ACM International Conference on Multimedia)
- EEGformer: A transformer–based brain activity classification method using EEG signal(Z. Wan, Manyu Li, Shichang Liu, Jiajin Huang, Hai Tan, Wenfeng Duan, 2023, Frontiers in Neuroscience)
- Robust EEG-Based Emotion Recognition using CNN: A High-Accuracy Approach with Differential Entropy Features and Spatial-Frequency Domain Analysis on the SEED Dataset(A. Kotwal, M. Verma, J. Manhas, V. Sharma, 2025, Journal of Scientific Research)
- A Multidomain Coupled Spatiotemporal Feature Interaction Model for EEG Emotion Recognition(Liyun Xu, Xiaofang Xing, Jiang Chang, Pan Lin, 2025, IEEE Transactions on Instrumentation and Measurement)
- EEG-ConvoBLSTM: A novel hybrid model for efficient EEG signal classification.(Lihua Zhang, Xin Zhang, Xiu Zhang, Yingjie Yang, 2025, Review of Scientific Instruments)
- IDEA: Intellect database for emotion analysis using EEG signal(Vaishali M. Joshi, R. Ghongade, 2020, Journal of King Saud University - Computer and Information Sciences)
Data-Driven Augmentation Strategies and Preprocessing
These studies emphasize data-level optimization (e.g., data augmentation, noise filtering, and synthetic data generation) to address the limited sample sizes and uneven quality of EEG datasets, thereby improving model robustness.
- Data-Centric AI for EEG-Based Emotion Recognition: Noise Filtering and Augmentation Strategies(Nadieh Moghadam, Rana Hegazy, 2025, Bioengineering)
- Hybrid CNN-LSTM Model for EEG-Based Emotion Recognition: A Comparative Analysis Using DEAP and SEED Datasets(Alekh Choudhary, Papri Das, Vikas Sharma, Tarun Kumar Vashishth, Sanjukta Vidyant, Sunil Kumar, 2025, 2025 International Conference on Communication, Computer, and Information Technology (IC3IT))
Traditional Machine Learning and Lightweight Feature Engineering
These papers compare or adopt traditional feature engineering (e.g., QLBP, frequency-domain features) paired with classical classifiers (e.g., SVM, random forest), examining their performance advantages over deep learning in resource-constrained or consumer-grade device settings.
- Single-Channel EEG Signal Based Emotion Recognition System Using Quantum Local Binary Pattern(M. Maithri, U. Raghavendra, Anjan Gudigar, Shaunak Geetprasad, Niha Singhania, Akshay B Salian, S. Praharaj, 2025, 2025 International Conference on Electronics and Computing, Communication Networking Automation Technologies (ICEC2NT))
- Spectral Graph Wavelet Transform-Based Feature Representation for Automated Classification of Emotions From EEG Signal(Rahul Krishna, Kritiprasanna Das, Hemant Kumar Meena, R. B. Pachori, 2023, IEEE Sensors Journal)
- Traditional Machine Learning Outperforms EEGNet for Consumer-Grade EEG Emotion Recognition: A Comprehensive Evaluation with Cross-Dataset Validation(Carlos Rodrigo Paredes Ocaranza, Bensheng Yun, Enrique Daniel Paredes Ocaranza, 2025, Sensors)
Application-Oriented System Integration and Cross-Domain Analysis
These papers explore integrating emotion recognition technology into concrete application scenarios (e.g., built environments, smart spaces) or involve cross-domain, cross-dataset analysis methods.
- Emotion Analysis AI Model for Sensing Architecture Using EEG(Seung-Yeul Ji, Mi-Kyoung Kim, Han-Jong Jun, 2025, Applied Sciences)
Methods Pending Classification or Not Yet Detailed
Because the provided references lack abstract information, they cannot be precisely assigned to a specific technical direction; they should be re-analyzed once the missing abstracts are supplied.
- Decoding emotional patterns using NIG modeling of EEG signals in the CEEMDAN domain(Nalini Pusarla, Anurag Singh, S. Tripathi, 2024, International Journal of Information Technology)
- Channel selection and feature extraction on deep EEG classification using metaheuristic and Welch PSD(Huseyin Cizmeci, Caner Ozcan, R. Durgut, 2022, Soft Computing)
- Analysis of the generalization ability of graph neural networks in cross-subject EEG emotion recognition(Lingyue Wang, Lei Guo, Xinsheng Yang, Ying Li, 2026, Neurological Sciences)
- IKKN Predictor: An EEG Signal Based Emotion Recognition for HCI(S. B. Wankhade, D. Doye, 2019, Wireless Personal Communications)
- Analysis of brain areas in emotion recognition from EEG signals with deep learning methods(Musa Aslan, M. Baykara, T. B. Alakuş, 2023, Multimedia Tools and Applications)
- EEG emotion recognition approach using multi-scale convolution and feature fusion(Yong Zhang, Qingguo Shan, Wenyun Chen, Wenzhe Liu, 2024, The Visual Computer)
- EEG emotion recognition across subjects based on deep feature aggregation and multi-source domain adaptation(Kunqiang Lin, Ying Li, Yi He, Zi-Cheng Jiang, Renjie He, Xianzhe Wang, Hongxu Guo, Lei Guo, 2025, Cognitive Neurodynamics)
- Attention with kernels for EEG-based emotion classification(Dongyang Kuang, C. Michoski, 2023, Neural Computing and Applications)
- EEG-based emotion recognition model using fuzzy adjacency matrix combined with convolutional multi-head graph attention mechanism(Mingwei Cao, Yindong Dong, Deli Chen, Guodong Wu, Gao-Jie Xu, Jun Zhang, 2025, Cluster Computing)
- Conditional probabilistic-based domain adaptation for cross-subject EEG-based emotion recognition(Shichao Cheng, Yifan Wang, Jiawei Mei, Guang Lin, Jianhai Zhang, Wanzeng Kong, 2025, Cognitive Neurodynamics)
- From gram to attention matrices: a monotonicity constrained method for eeg-based emotion classification(Dongyang Kuang, C. Michoski, Wenting Li, Rui Guo, 2023, Applied Intelligence)
For unimodal emotion recognition research on the SEED-family datasets, the mainstream direction has shifted from traditional machine learning relying on handcrafted features toward deep spatiotemporal modeling with hybrid architectures (CNN/LSTM/Transformer). At the same time, the field increasingly emphasizes data-quality-driven AI paradigms and lightweight, robust methods for practical deployment, reflecting an evolution from purely pursuing high accuracy toward developing interpretable, generalizable systems.
A total of 26 related publications.
Research in the biomedical field often faces challenges due to the scarcity and high cost of data, which significantly limit the development and application of machine learning models. This paper introduces a data-centric AI framework for EEG-based emotion recognition that emphasizes improving data quality rather than model complexity. Instead of proposing a deep architecture, we demonstrate how participant-guided noise filtering combined with systematic data augmentation can substantially enhance system performance across multiple classification settings: binary (high vs. low arousal), four-quadrant emotions, and seven discrete emotions. Using the SEED-VII dataset, we show that these strategies consistently improve accuracy and F1 scores, achieving competitive or superior performance compared to more sophisticated published models. The findings highlight a practical and reproducible pathway for advancing biomedical AI systems, showing that prioritizing data quality over architectural novelty yields robust and generalizable improvements in emotion recognition.
Objective. Consumer-grade EEG devices have the potential for widespread brain–computer interface deployment but pose significant challenges for emotion recognition due to reduced spatial coverage and the variable signal quality encountered in uncontrolled deployment environments. While deep learning approaches have employed increasingly complex architectures, their efficacy on noisy consumer-grade signals and their cross-system generalizability remain unexplored. We present a comprehensive systematic comparison of the EEGNet architecture, which has become a benchmark model for consumer-grade EEG analysis, versus traditional machine learning, examining when and why domain-specific feature engineering outperforms end-to-end learning in resource-constrained scenarios. Approach. We conducted comprehensive within-dataset evaluation using the DREAMER dataset (23 subjects, Emotiv EPOC 14-channel) and challenging cross-dataset validation (DREAMER→SEED-VII transfer). Traditional ML employed domain-specific feature engineering (statistical, frequency-domain, and connectivity features) with random forest classification. Deep learning employed both optimized and enhanced EEGNet architectures, specifically designed for low-channel consumer EEG systems. For cross-dataset validation, we implemented progressive domain adaptation combining anatomical channel mapping, CORAL adaptation, and TCA subspace learning. Statistical validation included 345 comprehensive evaluations with fivefold cross-validation × 3 seeds × 23 subjects, Wilcoxon signed-rank tests, and Cohen’s d effect size calculations. Main results. Traditional ML achieved superior within-dataset performance (F1 = 0.945 ± 0.034 versus 0.567 for EEGNet architectures, p < 0.000001, Cohen’s d = 3.863, 67% improvement) across 345 evaluations. Cross-dataset validation demonstrated good performance (F1 = 0.619 versus 0.007) through systematic domain adaptation.
Progressive improvements included anatomical channel mapping (5.8× improvement), CORAL domain adaptation (2.7× improvement), and TCA subspace learning (4.5× improvement). Feature analysis revealed inter-channel connectivity patterns contributed 61% of the discriminative power. Traditional ML demonstrated superior computational efficiency (95% faster training, 10× faster inference) and excellent stability (CV = 0.036). Fairness validation experiments supported the advantage of traditional ML in its ability to persist even with minimal feature engineering (F1 = 0.842 vs. 0.646 for enhanced EEGNet), and robustness analysis revealed that deep learning degrades more under consumer-grade noise conditions (17% vs. <1% degradation). Significance. These findings challenge the assumption that architectural complexity universally improves biosignal processing performance in consumer-grade applications. Through the comparison of traditional ML against the EEGNet consumer-grade architecture, we highlight the potential that domain-specific feature engineering and lightweight adaptation techniques can provide superior accuracy, stability, and practical deployment capabilities for consumer-grade EEG emotion recognition. While our empirical comparison focused on EEGNet, the underlying principles regarding data efficiency, noise robustness, and the value of domain expertise could extend to comparisons with other complex architectures facing similar constraints in further research. This comprehensive domain adaptation framework enables robust cross-system deployment, addressing critical gaps in real-world BCI applications.
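The CORAL step in the domain adaptation pipeline above aligns the second-order statistics of source features to the target domain. A minimal numpy sketch of standard CORAL (random matrices stand in for DREAMER/SEED-VII feature sets; this is an illustrative reconstruction, not the paper's code):

```python
import numpy as np

def _sqrtm(c, inverse=False):
    """Matrix square root (or inverse square root) of an SPD matrix
    via eigendecomposition."""
    vals, vecs = np.linalg.eigh(c)
    power = -0.5 if inverse else 0.5
    return (vecs * vals ** power) @ vecs.T

def coral(source, target, eps=1e-5):
    """CORAL: whiten source features with their own covariance, then
    re-color them with the target covariance (plus a mean shift)."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    centered = source - source.mean(axis=0)
    return centered @ _sqrtm(cs, inverse=True) @ _sqrtm(ct) + target.mean(axis=0)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(300, 4))   # stand-in for source-domain features
tgt = rng.normal(3.0, 2.0, size=(300, 4))   # stand-in for target-domain features
aligned = coral(src, tgt)
# After alignment, the source covariance approximates the target's
print(np.allclose(np.cov(aligned, rowvar=False), np.cov(tgt, rowvar=False), atol=1e-3))
```

The `eps` ridge keeps both covariances invertible when features are nearly collinear, which is common with connectivity features on low-channel devices.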
No abstract available
No abstract available
Recognition of emotion from Electroencephalogram (EEG) signals has become a key area in healthcare and human-computer interaction. The current study introduces a novel method for emotion recognition. Extraction of features from EEG signals is performed using Quantum Local Binary Pattern (QLBP), and further emotions were classified using a Support Vector Machine (SVM). The proposed approach targets single-channel EEG analysis, i.e., T7 channel from the SEED dataset, for classifying emotions into positive, neutral, and negative states. QLBP acquires subtle variations from EEG signals effectively with strong feature representation. Furthermore, the SVM classifier with various kernels, including polynomial, linear, Radial Basis Function, Gaussian, and with different neighbor values, was tested. The linear kernel yielded the maximum average accuracy of 94.5% with two neighbors. The findings highlight the advantages of combining QLBP with SVM to determine its feasibility for implementing an efficient real-world mental state estimation system.
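The QLBP operator above is a quantum-inspired extension of the local binary pattern. As a rough illustration of the underlying idea on a single-channel signal, here is a hedged sketch of a classical one-dimensional LBP (the exact QLBP encoding follows the cited paper, not this code); the histogram it returns is the kind of fixed-length feature vector that would feed an SVM:

```python
import numpy as np

def lbp_1d(signal, radius=2):
    """Classical 1D local binary pattern: compare each sample with
    `radius` neighbours on each side; the comparison bits form a
    pattern code, and the code histogram is the feature vector."""
    n = len(signal)
    codes = []
    for i in range(radius, n - radius):
        neighbours = np.r_[signal[i - radius:i], signal[i + 1:i + 1 + radius]]
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()  # normalized pattern histogram

x = np.sin(np.linspace(0, 8 * np.pi, 400))  # toy single-channel signal
feat = lbp_1d(x)
print(feat.shape)  # (16,)
```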
Electroencephalogram (EEG) monitors the brain’s electrical activity and carries useful information regarding the subject’s emotional states. Due to its nonstationary and complex nature, proper signal-processing techniques are necessary to obtain meaningful interpretations. The EEG signal has been represented using a graph by incorporating the temporal dependency. In this article, a novel feature based on spectral graph wavelet transform (SGWT) for representing EEG signals has been proposed by considering the interdependency among different samples of EEG signals. SGWT is effective in finding multiscale information at the local level as well as the global level. These multiscale representations allow for the extraction of information about the EEG signal at different scales. The SGWT coefficients are used to develop machine-learning classifiers for emotion identification. Principal component analysis (PCA) is also used for feature reduction. The proposed framework is evaluated based on a publicly available SEED dataset with the help of extensive experiments. The k-nearest neighbor (KNN) classifier provides 97.3% accuracy with a standard deviation of 1.2%. The SGWT-based representation has achieved 12.7% higher accuracy compared to the raw EEG signal, which shows the usefulness of the proposed approach. Our model for emotion recognition attains superior classification performance compared to state-of-the-art methods. Finally, the investigation of interdependency among the samples of EEG signals reveals that the SGWT-based representation of EEG signals is a useful tool for analyzing EEG signals.
The area of human emotion recognition using EEG signals is evolving rapidly and has, over time, become an important area of research for affective computing in the field of neuroscience. Neuro-computing has also shown potential applications in the domains of mental health monitoring, brain-computer interfaces, and adaptive learning systems. Deep learning models have shown significant progress in producing effective results when applied to the analysis of different EEG signals. In this study, the efficiency of Convolutional Neural Network (CNN) models for emotion categorization is investigated on an EEG-based SEED dataset. Differential Entropy (DE) characteristics derived from five important EEG rhythms (delta, theta, alpha, beta, and gamma) are used as inputs to CNN classifiers. To enhance performance, the model uses a two-dimensional (2D) tensor representation of the input, which allows the network to learn and use spatial correlations between different EEG channels. Experimental results show that the proposed CNN-based strategy outperforms previous methods with an average accuracy of 94.09%. These findings highlight the potential of CNNs in developing robust and scalable solutions for EEG-based emotion recognition, providing a path for more intuitive and adaptive systems in future applications.
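The differential entropy feature used above has a closed form under a Gaussian assumption: DE = 0.5 * ln(2πeσ²) per band. A minimal sketch computing per-band DE for one channel (an FFT-mask band-pass stands in for whatever filtering the paper actually uses; band cut-offs vary slightly across papers):

```python
import numpy as np

# Canonical EEG rhythm bands in Hz (exact cut-offs differ between papers)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def band_de(signal, fs):
    """Differential entropy of each rhythm band for one EEG channel.
    Under a Gaussian assumption, DE reduces to 0.5*ln(2*pi*e*variance)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n=len(signal))  # crude FFT band-pass
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * band.var())
    return feats

fs = 200                      # SEED recordings are downsampled to 200 Hz
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)  # 10 Hz alpha tone
de = band_de(x, fs)
print(max(de, key=de.get))    # alpha band dominates for a 10 Hz tone
```

Stacking these five per-band values across channels gives the 2D tensor the CNN consumes.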
EEG-based emotion recognition is complicated by high-dimensional, often noisy signals and limited labeled data. Hybrid CNN-LSTM models have been evaluated extensively in the literature, but they rarely combine advanced data augmentation techniques with strong feature selection methods in a way that improves generalization. This paper builds upon a hybrid CNN-LSTM approach, proposing and implementing GAN-based synthetic EEG data generation and PCA-Mutual Information (PCA+MI) feature selection, and evaluates them on the DEAP and SEED datasets. The proposed method surpasses the base CNN-LSTM models, earning 5.2% more accuracy on DEAP and 4.8% on SEED, while falling short of the state of the art by 1.9% and 3.1% accuracy on the respective datasets. Ablation shows that GAN augmentation alone raised accuracy by 2-3%. These improvements demonstrate the value of combining GAN-based augmentation with feature selection methods for EEG emotion recognition and also suggest applications in human-interactive computational BCI systems.
Electroencephalogram (EEG) signals pose a challenge to emotion recognition (ER) tasks due to their complexity and individual differences. Conventional machine learning methods usually rely on handcrafted feature extraction and perform poorly in cross-subject ER. In recent years, deep learning methods have made significant progress in the analysis of EEG signals. However, existing methods still have limitations in the comprehensive modeling of temporal and spatial features and the capture of long-term dependent information. In this paper, we propose a new hybrid model to enhance the accuracy and cross-subject generalization of ER from EEG signals. In particular, the proposed model extracts local spatiotemporal features of EEG signals through convolutional layers. It further captures long-term sequential dependencies through a bidirectional long short-term memory network (BLSTM). The proposed model can achieve more comprehensive modeling of spatiotemporal features. The efficacy of the model was evaluated using the SJTU Emotion EEG Dataset (SEED), a widely used dataset for emotion recognition studies, with a comparison made with traditional machine learning methods and existing deep learning models. The experimental results demonstrate that the proposed hybrid model performs well in terms of accuracy, Kappa coefficient, and F1-score. The proposed model especially shows strong ability in distinguishing cross-subject emotion categories. In addition, ablation experiments verified the key role of the combination of convolution operation and BLSTM in improving model performance. The proposed model is useful for applications in multimodal data fusion and more complex ER tasks.
Background The effective analysis methods for steady-state visual evoked potential (SSVEP) signals are critical in supporting an early diagnosis of glaucoma. Most efforts focused on adopting existing techniques to the SSVEPs-based brain–computer interface (BCI) task rather than proposing new ones specifically suited to the domain. Method Given that electroencephalogram (EEG) signals possess temporal, regional, and synchronous characteristics of brain activity, we proposed a transformer–based EEG analysis model known as EEGformer to capture the EEG characteristics in a unified manner. We adopted a one-dimensional convolution neural network (1DCNN) to automatically extract EEG-channel-wise features. The output was fed into the EEGformer, which is sequentially constructed using three components: regional, synchronous, and temporal transformers. In addition to using a large benchmark database (BETA) toward SSVEP-BCI application to validate model performance, we compared the EEGformer to current state-of-the-art deep learning models using two EEG datasets, which are obtained from our previous study: SJTU emotion EEG dataset (SEED) and a depressive EEG database (DepEEG). Results The experimental results show that the EEGformer achieves the best classification performance across the three EEG datasets, indicating that the rationality of our model architecture and learning EEG characteristics in a unified manner can improve model classification performance. Conclusion EEGformer generalizes well to different EEG datasets, demonstrating our approach can be potentially suitable for providing accurate brain activity classification and being used in different application scenarios, such as SSVEP-based early glaucoma diagnosis, emotion recognition and depression discrimination.
Emotion recognition using Electroencephalography (EEG) is a convenient and reliable technique. EEG based emotion detection study can find its application in various fields such as defense, aerospace, medical, and many more. This analysis helps to understand the emotional state of mind. There are two approaches to study EEG analysis known as subject dependent and independent. In this paper, a Modified Differential Entropy (MD-DE) feature extractor is proposed to detect nonlinearity and non-Gaussianity of the EEG signal. The paper adopts both approaches by conducting an EEG analysis on an own generated database named ‘IDEA- Intellect Database for Emotion Analysis’ on 14 subjects. In this work, a bidirectional long short-term memory (BiLSTM) network and a multilayer perceptron (MLP) network are used to classify the emotional state of mind of the subjects. On the ‘IDEA’ database, the subject dependent average accuracy achieved is on the order of 98.5%, and for subject independent, 88.57%. To reaffirm the improvement in accuracy level, the new approach of Modified Differential Entropy and BiLSTM network is applied on the openly available SEED and DEAP databases as well. This experiment established that the average accuracy of emotion detection using MD-DE and a BiLSTM network is better than the established methods.
The rapid advancement of artificial intelligence (AI) has spurred innovation across various domains—information technology, medicine, education, and the social sciences—and is likewise creating new opportunities in architecture for understanding human–environment interactions. This study aims to develop a fine-tuned AI model that leverages electroencephalography (EEG) data to analyse users’ emotional states in real time and apply these insights to architectural spaces. Specifically, the SEED dataset—an EEG-based emotion recognition resource provided by the BCMI laboratory at Shanghai Jiao Tong University—was employed to fine-tune the ChatGPT model for classifying three emotional states (positive, neutral, and negative). Experimental results demonstrate the model’s effectiveness in differentiating these states based on EEG signals, although the limited number of participants confines our findings to a proof of concept. Furthermore, to assess the feasibility of the proposed approach in real architectural contexts, we integrated the model into a 360° virtual reality (VR) setting, where it showed promise for real-time emotion recognition and adaptive design. By combining AI-driven biometric data analysis with user-centred architectural design, this study aims to foster sustainable built environments that respond dynamically to human emotions. The results underscore the potential of EEG-based emotion recognition for enhancing occupant experiences and provide foundational insights for future investigations into human–space interactions.
No abstract available
No abstract available
No abstract available
No abstract available
Emotional states play a crucial role in shaping decision-making and social interactions, with sentiment analysis becoming an essential technology in human–computer emotional engagement, garnering increasing interest in artificial intelligence research. In EEG-based emotion analysis, the main challenges are feature extraction and classifier design, making the extraction of spatiotemporal information from EEG signals vital for effective emotion classification. Current methods largely depend on machine learning with manual feature extraction, while deep learning offers the advantage of automatic feature extraction and classification. Nonetheless, many deep learning approaches still necessitate manual preprocessing, which hampers accuracy and convenience. This paper introduces a novel deep learning technique that integrates multi-scale convolution and bidirectional long short-term memory networks with an attention mechanism for automatic EEG feature extraction and classification. By using raw EEG data, the method applies multi-scale convolutional neural networks and bidirectional long short-term memory networks to extract and merge features, selects key features via an attention mechanism, and classifies emotional EEG signals through a fully connected layer. The proposed model was evaluated on the SEED dataset for emotion classification. Experimental results demonstrate that this method effectively classifies EEG-based emotions, achieving classification accuracies of 99.44% for the three-class task and 99.85% for the four-class task in single validation, with average 10-fold-cross-validation accuracies of 99.49% and 99.70%, respectively. These findings suggest that the MSBiLSTM-Attention model is a powerful approach for emotion recognition.
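The attention mechanism described above scores each time step's hidden state and pools the sequence by the resulting weights before the fully connected classifier. A minimal numpy sketch of this kind of attention pooling (random vectors stand in for BiLSTM outputs and learned parameters; illustrative only):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(hidden, w):
    """Score each time step's hidden state with a learned vector `w`,
    softmax the scores into weights, and return the weighted sum.
    `hidden` is (T, d); returns a (d,) pooled feature and the weights."""
    scores = hidden @ w          # (T,) unnormalized relevance scores
    alpha = softmax(scores)      # attention weights, non-negative, sum to 1
    return alpha @ hidden, alpha

rng = np.random.default_rng(0)
T, d = 50, 8
hidden = rng.normal(size=(T, d))   # stand-in for BiLSTM output sequence
w = rng.normal(size=d)             # stand-in for the learned scoring vector
pooled, alpha = attention_pool(hidden, w)
print(pooled.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
```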
Human emotion is a core link between cognition, behavior, and physiology, and its accurate recognition is crucial for advancing the development of intelligent human-computer interaction, mental health diagnosis, and other related fields. Current research mostly achieves multi-domain feature fusion through simple concatenation or weighted fusion at the algorithmic level, failing to fully reveal and exploit the feature contribution rates associated with emotional processing. To address the aforementioned issues, this study first conducts sufficient feature extraction across the temporal, frequency, and spatial domains, and then proposes a hybrid encoder model integrating 3D-CNN, LSTM, and an attention mechanism, named CLANet. This model can not only capture local dynamic patterns but also integrate global spatial configurations, thereby providing a novel approach for emotion recognition and improving recognition accuracy. Experiments conducted on the SEED IV dataset demonstrate that the proposed CLANet model achieves a test accuracy of 93.8%, outperforming state-of-the-art models such as support vector machines (SVMs), BiLSTM, Hierarchical LSTM, and EEGNet. Furthermore, the fusion of multi-domain features (temporal, frequency, and spatial) significantly enhances recognition performance, achieving a maximum accuracy of 94.0% in the θ band. This study provides a more physiologically relevant architecture for EEG-based emotion recognition and offers technical support for its practical applications in related fields.
Electroencephalogram (EEG) is a powerful tool for monitoring the brain’s electrical activity, providing valuable insights into an individual’s emotional state. The task of emotion recognition from EEG signals has gained significant attention, particularly with the advent of deep learning techniques. However, challenges such as the inherent instability of EEG signals and insufficient feature extraction methods can hinder effective recognition. In this study, we propose a novel approach for EEG emotion recognition—the multidomain coupled spatiotemporal feature interaction model (MCSFIM). First, leveraging the energy aggregation characteristics of the fractional Fourier transform (FrFT), we extract the features of the EEG signal fractional power spectral density (FrPSD) from multidomain to address the nonstationarity of EEG signals and construct a more comprehensive feature set. Then, by integrating graph convolutional networks (GCNs) and bidirectional long short-term memory (BiLSTM) networks, we capture the spatial topology and temporal dependencies of EEG signals for feature interaction and classification. To further enhance classification accuracy, we propose a dynamically weighted loss (DWL) function to reduce interclass imbalance and achieve more precise emotion recognition. Extensive experimental results on the DEAP and SEED datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
No abstract available
Emotion recognition based on EEG (electroencephalography) has been widely used in human-computer interaction, distance education and health care. However, the conventional methods ignore the adjacent and symmetrical characteristics of EEG signals, which also contain salient information related to emotion. In this paper, a spatial folding ensemble network (SFE-Net) is presented for EEG feature extraction and emotion recognition. Firstly, for the undetected area between EEG electrodes, an improved Bicubic-EEG interpolation algorithm is developed for EEG channels information completion, which allows us to extract a wider range of adjacent space features. Then, motivated by the spatial symmetric mechanism of human brain, we fold the input EEG channels data with five different symmetrical strategies, which enable the proposed network to extract the information of space features of EEG signals more effectively. Finally, a 3DCNN-based spatial, temporal extraction, and a multi-voting strategy of ensemble learning are integrated to model a new neural network. With this network, the spatial features of different symmetric folding signals can be extracted simultaneously, which greatly improves the robustness and accuracy of emotion recognition. The experimental results on DEAP and SEED datasets show that the proposed algorithm has comparable performance in terms of recognition accuracy.
Emotion, as a high-level function of the human brain, has a great impact on people’s mental health. To fully consider EEG signals’ spatial information and time-frequency information, and realize human-computer interaction better. This paper proposes an improved DenseNet emotion recognition model based on 3D feature map. By extracting the differential entropy features of the θ, α, β and γ frequency bands of the EEG signals, and combining the position mapping relationship of the EEG channel electrodes, a three-dimensional feature map is constructed, and then the improved densely connected convolutional network (DenseNet) is used for secondary feature extraction and classification. To verify the effectiveness of this method, a classification experiment including positive, neutral and negative emotions is carried out on the SEED data set. The classification accuracy rates obtained in the single-subject experiment and the all-subject experiment are 98.51% and 98.68%, respectively. The experimental results show that the method of 3D feature map combined with feature reuse can get high-precision classification results, which provides a new direction for emotion recognition.
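The 3D feature map construction described above places each channel's per-band features at that electrode's position on a 2D scalp grid and stacks the bands. A toy sketch with a hypothetical 3×3 grid (real SEED layouts map 62 electrodes onto a larger sparse grid; electrode positions here are illustrative only):

```python
import numpy as np

# Toy 3x3 scalp grid keyed by electrode name (illustrative placement)
GRID = {"F3": (0, 0), "FZ": (0, 1), "F4": (0, 2),
        "C3": (1, 0), "CZ": (1, 1), "C4": (1, 2),
        "P3": (2, 0), "PZ": (2, 1), "P4": (2, 2)}

def to_3d_map(band_feats):
    """Stack per-band, per-channel features into a (bands, h, w) tensor.
    `band_feats` maps band name -> {electrode: feature value}; grid
    cells with no electrode stay zero."""
    cube = np.zeros((len(band_feats), 3, 3))
    for b, chans in enumerate(band_feats.values()):
        for ch, val in chans.items():
            r, c = GRID[ch]
            cube[b, r, c] = val
    return cube

# Dummy differential-entropy values for the four bands used in the paper
feats = {band: {ch: float(i + b) for i, ch in enumerate(GRID)}
         for b, band in enumerate(["theta", "alpha", "beta", "gamma"])}
cube = to_3d_map(feats)
print(cube.shape)  # (4, 3, 3)
```

The resulting (bands, height, width) cube is the image-like input a DenseNet-style CNN can convolve over.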
No abstract available
No abstract available
No abstract available
No abstract available