Electroencephalogram (EEG)-based emotion recognition has attracted increasing attention due to its broad applications in mental health monitoring and brain-computer interfaces. However, existing methods still suffer from inadequate utilization of electrode spatial information, limited labeled data, and cross-domain distribution shifts. To address these issues, an uncertainty-aware prototypical learning method for cross-domain emotion recognition is proposed. First, a position encoding-guided graph semi-supervised module is designed: by incorporating sine-cosine positional encoding, the spatial topology of the electrodes is embedded into the adjacency matrix. The limitation of traditional graph construction based solely on physical distance is thus overcome, and physiological priors of EEG signals are effectively integrated. On this basis, a graph-based feature propagation mechanism is leveraged to collaboratively learn deep topological representations from both labeled and unlabeled samples, mitigating the challenge of label scarcity. Second, an uncertainty-aware prototype learning module is constructed to dynamically refine emotion prototypes by quantifying feature reliability, thereby suppressing noise and cross-domain shifts while enhancing the generalization ability of the model across domains. Experiments on multiple public EEG emotion datasets demonstrate that the proposed method outperforms state-of-the-art methods on cross-domain recognition tasks.
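To make the two mechanisms summarized above more concrete, the following minimal NumPy sketch illustrates one plausible way to fold a sine-cosine positional encoding of electrode coordinates into a graph adjacency matrix, and to down-weight unreliable samples when refining emotion prototypes. The function names, the entropy-based reliability score, and the blending coefficient alpha are assumptions for illustration only; they are not the paper's exact formulation.

```python
import numpy as np

def sincos_encoding(positions, dim=16):
    """Map 3-D electrode coordinates to a sine-cosine positional code.

    positions: (n_channels, 3) array of electrode coordinates.
    Returns an (n_channels, dim) encoding (assumed, transformer-style form).
    """
    n = positions.shape[0]
    enc = np.zeros((n, dim))
    freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    # Collapse each 3-D coordinate to a scalar index for simplicity.
    scalar = positions @ np.array([1.0, 10.0, 100.0])
    enc[:, 0::2] = np.sin(scalar[:, None] * freqs[None, :])
    enc[:, 1::2] = np.cos(scalar[:, None] * freqs[None, :])
    return enc

def position_aware_adjacency(positions, sigma=1.0, alpha=0.5, dim=16):
    """Blend a physical-distance graph with positional-encoding similarity."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist2 = (diff ** 2).sum(-1)
    a_dist = np.exp(-dist2 / (2 * sigma ** 2))      # distance-based edges
    enc = sincos_encoding(positions, dim)
    enc = enc / (np.linalg.norm(enc, axis=1, keepdims=True) + 1e-8)
    a_pos = np.clip(enc @ enc.T, 0, None)            # encoding similarity
    return alpha * a_dist + (1 - alpha) * a_pos

def uncertainty_weighted_prototypes(features, probs, labels, n_classes):
    """Refine class prototypes, down-weighting high-entropy (uncertain) samples."""
    entropy = -(probs * np.log(probs + 1e-8)).sum(1)
    weights = np.exp(-entropy)                        # assumed reliability score
    protos = np.zeros((n_classes, features.shape[1]))
    for c in range(n_classes):
        m = labels == c
        w = weights[m] / (weights[m].sum() + 1e-8)
        protos[c] = (w[:, None] * features[m]).sum(0)
    return protos
```

As a rough usage example under the same assumptions, calling position_aware_adjacency on a (62, 3) array of channel coordinates yields a (62, 62) matrix that a graph convolution layer could consume, and uncertainty_weighted_prototypes could be applied to the features and softmax outputs of labeled and pseudo-labeled samples to obtain refined emotion prototypes.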