Citation: YAN Heng, ZHOU Zheng-kang, HE Xin-sheng, LI Jun-hua, YUAN Zhen, WANG Hong-tao. Dynamic graph-temporal convolutional neural network for EEG-fNIRS multimodal motor imagery/execution decoding[J]. Control Theory & Applications, 2026, 43(3): 661-670.
Dynamic graph-temporal convolutional neural network for EEG-fNIRS multimodal motor imagery/execution decoding
Received: 2023-12-04    Revised: 2026-01-15
DOI: 10.7641/CTA.2024.30779
2026, 43(3): 661-670
Keywords: brain-computer interface; motor imagery; functional near-infrared spectroscopy; dynamic graph convolutional neural network; temporal convolutional network
Funding: Supported by the International Cooperation Project of the Guangdong Provincial Department of Science and Technology (2023A0505050144) and the Wuyi University College Students' Innovation and Entrepreneurship Training Program (2021CX07, 2024039).
Abstract
This paper proposes a deep learning model that integrates dynamic graph convolution and temporal convolution for the joint analysis of electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals, so that spatial and temporal information complement each other. The method proceeds as follows. First, the phase locking value method is employed to determine the graph structure information among the channels of EEG and of fNIRS, respectively. Next, the preprocessed EEG and fNIRS data are fed into separate convolutional layers. Then, the feature information output by these convolutional layers, together with the graph structure information, is fed into a dynamic graph convolutional neural network, after which a temporal convolution layer extracts the temporal features of each modality. The outputs are concatenated and fed into a further convolutional layer for feature-level fusion. Finally, the fused classification result is obtained through a fully connected layer. Three datasets are used to evaluate the performance of the proposed model. The experimental results show that the multimodal classification accuracies of the model on all three datasets surpass the corresponding single-modality classification performances of EEG and fNIRS. Ablation experiments further verify the robustness of the model.
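The phase locking value (PLV) named in the first step is a standard phase-synchrony measure: for channels i and j with instantaneous phases phi_i(t) and phi_j(t), PLV_ij = |(1/T) * sum_t exp(j * (phi_i(t) - phi_j(t)))|, which lies in [0, 1]. The sketch below shows one common way to build a channel graph from it; the channel counts and signal lengths are illustrative placeholders, not values from the paper.

```python
# A minimal sketch of PLV-based adjacency estimation, assuming band-limited
# signals stored as a NumPy array of shape (channels, samples).
import numpy as np
from scipy.signal import hilbert

def plv_adjacency(signals: np.ndarray) -> np.ndarray:
    """Return the (channels x channels) phase-locking-value matrix."""
    # Instantaneous phase of each channel from its analytic (Hilbert) signal.
    phases = np.angle(hilbert(signals, axis=-1))
    n = signals.shape[0]
    adj = np.ones((n, n))  # PLV of a channel with itself is 1
    for i in range(n):
        for j in range(i + 1, n):
            # PLV_ij = |mean_t exp(j * (phi_i(t) - phi_j(t)))|
            plv = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
            adj[i, j] = adj[j, i] = plv
    return adj

# Separate channel graphs for each modality, as the abstract describes.
eeg = np.random.randn(30, 1000)   # placeholder: 30 EEG channels, 1000 samples
fnirs = np.random.randn(36, 200)  # placeholder: 36 fNIRS channels, 200 samples
A_eeg, A_fnirs = plv_adjacency(eeg), plv_adjacency(fnirs)
```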
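One common way to make such a graph convolution "dynamic" is to treat the adjacency matrix as a learnable parameter initialized from the PLV graph, so the connectivity is refined during training. The layer below is a minimal sketch under that assumption, not necessarily the authors' exact formulation; the single-hop propagation rule X' = ReLU(D^{-1/2} A D^{-1/2} X W) is a simplifying choice.

```python
# A sketch of a dynamic graph convolution layer: the adjacency is a learnable
# parameter initialized from the PLV graph (an assumption, not the paper's
# verified design), refined jointly with the rest of the network.
import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    def __init__(self, in_feats: int, out_feats: int, init_adj: torch.Tensor):
        super().__init__()
        self.adj = nn.Parameter(init_adj.clone())  # learnable ("dynamic") adjacency
        self.lin = nn.Linear(in_feats, out_feats, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., channels, in_feats); leading dimensions broadcast freely.
        a = torch.relu(self.adj)                       # keep edge weights non-negative
        a = a + torch.eye(a.size(0), device=a.device)  # add self-loops
        d = a.sum(dim=-1).rsqrt()                      # D^{-1/2} from node degrees
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)   # symmetric normalization
        return torch.relu(a_norm @ self.lin(x))        # propagate, transform, activate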
|
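Putting the pieces together, the following sketch mirrors the pipeline the abstract describes: per-modality convolution, dynamic graph convolution over the channel graph, a further temporal convolution, concatenation, a convolutional fusion layer, and a fully connected classifier. Kernel sizes, feature widths, the channel pooling, and the time-axis alignment are all illustrative assumptions rather than the paper's reported architecture.

```python
# A hedged end-to-end sketch built from the pieces above; all hyperparameters
# are placeholders chosen only so the shapes work out.
class EEGfNIRSFusionNet(nn.Module):
    def __init__(self, a_eeg: torch.Tensor, a_fnirs: torch.Tensor,
                 n_classes: int = 2, feat: int = 16, steps: int = 8):
        super().__init__()
        # Per-modality convolutions: a (1 x k) kernel filters along time while
        # preserving the channel axis needed by the graph convolution.
        self.eeg_conv = nn.Conv2d(1, feat, kernel_size=(1, 25), padding=(0, 12))
        self.fnirs_conv = nn.Conv2d(1, feat, kernel_size=(1, 5), padding=(0, 2))
        self.eeg_gcn = DynamicGraphConv(feat, feat, a_eeg)
        self.fnirs_gcn = DynamicGraphConv(feat, feat, a_fnirs)
        # One further temporal convolution per modality.
        self.eeg_tcn = nn.Conv1d(feat, feat, kernel_size=9, padding=4)
        self.fnirs_tcn = nn.Conv1d(feat, feat, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool1d(steps)               # align both time axes
        self.fuse = nn.Conv1d(2 * feat, feat, kernel_size=1)  # feature-level fusion
        self.fc = nn.Linear(feat * steps, n_classes)

    def _branch(self, x, conv, gcn, tcn):
        h = conv(x.unsqueeze(1))             # (B, C, T) -> (B, F, C, T)
        h = h.permute(0, 3, 2, 1)            # (B, T, C, F): graph conv over channels
        h = gcn(h)
        h = h.mean(dim=2).permute(0, 2, 1)   # pool channels -> (B, F, T)
        return self.pool(tcn(h))             # temporal features at a shared length

    def forward(self, x_eeg: torch.Tensor, x_fnirs: torch.Tensor) -> torch.Tensor:
        h_eeg = self._branch(x_eeg, self.eeg_conv, self.eeg_gcn, self.eeg_tcn)
        h_fnirs = self._branch(x_fnirs, self.fnirs_conv, self.fnirs_gcn, self.fnirs_tcn)
        h = torch.cat([h_eeg, h_fnirs], dim=1)           # concatenate modalities
        return self.fc(torch.relu(self.fuse(h)).flatten(1))
```

For example, with the placeholder PLV matrices above, `model = EEGfNIRSFusionNet(torch.tensor(A_eeg, dtype=torch.float32), torch.tensor(A_fnirs, dtype=torch.float32))` maps a batch such as `model(torch.randn(4, 30, 1000), torch.randn(4, 36, 200))` to class logits of shape (4, n_classes).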
|
|
|