EEG, Spinal Electrical Signals, and Motor Intention Sensing
Brain–Spine Interfaces (BSI) and Spinal Neural-Drive Decoding
This group of papers focuses on establishing direct communication between the brain and spinal cord (a "digital bridge") to bypass the lesion site and restore motor function, by decoding the discharge activity of spinal motor neurons, spinal motor primitives, or directly recorded spinal cord potentials (SCP). The studies span animal-model validation and human clinical evidence.
- Walking naturally after spinal cord injury using a brain–spine interface(H. Lorach, Andrea Gálvez, Valeria Spagnolo, Félix Martel, Serpil Karakas, Nadine Intering, M. Vat, Olivier Faivre, C. Harte, Salif Komi, Jimmy Ravier, Thibault Collin, Laure Coquoz, Icare Sakr, Edeny Baaklini, Sergio D. Hernandez-Charpak, Grégory Dumont, R. Buschman, Nicholas Buse, Tim Denison, I. V. van Nes, L. Asboth, A. Watrin, L. Struber, F. Sauter-Starace, L. Langar, V. Auboiroux, S. Carda, S. Chabardès, T. Aksenova, Robin Demesmaeker, G. Charvet, J. Bloch, G. Courtine, 2023, Nature)
- Neuromorphic Decoding of Spinal Motor Neuron Behaviour During Natural Hand Movements for a New Generation of Wearable Neural Interfaces(Simone Tanzarella, Massimiliano Iacono, Elisa Donati, D. Farina, C. Bartolozzi, 2023, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Real-Time Myoelectric-Based Neural-Drive Decoding for Concurrent and Continuous Control of Robotic Finger Forces(Long Meng, Luis Vargas, Derek G. Kamper, Xiaogang Hu, 2025, IEEE Transactions on Human-Machine Systems)
- Detection and evaluation of brain spinal cord conduction function based on functional electrical stimulation(Lei Ma, Feng Ju, Kaixin Pan, C. Tao, X. Shen, 2019, 2019 IEEE Integrated STEM Education Conference (ISEC))
- Source localization of simulated neural signals in a cervical spinal cord model(M. Oberndorfer, Gernot R. Müller-Putz, 2026, Journal of Neural Engineering)
- The Volitional Control of Individual Motor Units Is Constrained within Low-Dimensional Neural Manifolds by Common Inputs(J. Rossato, S. Avrillon, K. Tucker, Dario Farina, François Hug, 2024, The Journal of Neuroscience)
- Estimation of Neuromuscular Primitives from EEG Slow Cortical Potentials in Incomplete Spinal Cord Injury Individuals for a New Class of Brain-Machine Interfaces(A. Úbeda, J. Azorín, D. Farina, Massimo Sartori, 2018, Frontiers in Computational Neuroscience)
- Integration of brain-computer interfaces with sacral nerve stimulation: a vision for closed-loop, volitional control of bladder function in neurogenic patients through real-time cortical signal modulation and peripheral neuro-stimulation(Ali Aamir, Munaim Siddiqui, 2025, World Journal of Urology)
- Concurrent spinal and brain imaging with optically pumped magnetometers(Lydia C. Mardell, G. O’Neill, T. Tierney, R. Timms, C. Zich, G. Barnes, S. Bestmann, 2022, bioRxiv)
- A Brain–Spinal Interface Alleviating Gait Deficits after Spinal Cord Injury in Primates(M. Capogrosso, T. Milekovic, D. Borton, Fabien B. Wagner, E. M. Moraud, Jean-Baptiste Mignardot, Nicolas Buse, Jérôme Gandar, Q. Barraud, David Xing, Elodie Rey, S. Duis, Jianzhong Yang, W. K. D. Ko, Qin Li, P. Detemple, Tim Denison, S. Micera, E. Bézard, J. Bloch, G. Courtine, 2016, Nature)
- Prediction of Forelimb EMGs and Movement Phases from Corticospinal Signals in the Rat During the Reach-to-Pull Task(Sinan Gok, M. Sahin, 2019, International journal of neural systems)
- Can motor volition be extracted from the spinal cord?(Abhishek Prasad, M. Sahin, 2012, Journal of NeuroEngineering and Rehabilitation)
- Electronic bypass of spinal lesions: activation of lower motor neurons directly driven by cortical neural signals(Yan Li, Monzurul Alam, Shanshan Guo, K. Ting, Jufang He, 2014, Journal of NeuroEngineering and Rehabilitation)
- A New Nonlinear Autoregressive Exogenous (NARX)-based Intra-spinal Stimulation Approach to Decode Brain Electrical Activity for Restoration of Leg Movement in Spinally-injured Rabbits(Mohamad Amin Younessi Heravi, K. Maghooli, Fereidoun Nowshiravan Rahatabad, R. Rezaee, 2023, Basic and Clinical Neuroscience)
- Brain-Computer-Spinal Interface Restores Upper Limb Function After Spinal Cord Injury(S. Samejima, Abed Khorasani, V. Ranganathan, Jared Nakahara, Nicholas M. Tolley, Adrien Boissenin, V. Shalchyan, M. Daliri, Joshua R. Smith, C. Moritz, 2021, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Implantable brain–computer interface for neuroprosthetic-enabled volitional hand grasp restoration in spinal cord injury(Iahn Cajigas, K. Davis, Benyamin Meschede-Krasa, N. Prins, S. Gallo, J. Naeem, Anne E. Palermo, A. Wilson, Santiago Guerra, Brandon Parks, Lauren L. Zimmerman, K. Gant, A. Levi, W. Dietrich, Letitia D. Fisher, S. Vanni, John Tauber, Indie C. Garwood, John H. Abel, E. Brown, Michael E. Ivan, Abhishek Prasad, J. Jagid, 2021, Brain Communications)
- Brain-spine interface for movement restoration after spinal cord injury(T. LakshmiPriya, S. Gopinath, 2024, Brain & Spine)
Advanced Decoding Models Based on Deep Learning and Neuromorphic Computing
These studies apply state-of-the-art AI techniques (e.g., Transformers, graph convolutional networks (GCN), spiking neural networks (SNN), attention mechanisms) to high-dimensional, non-stationary EEG/ECoG data. Key concerns are cross-subject generalization, automatic feature extraction, and real-time, low-power implementation on edge computing platforms.
- Spiking Neural Network Approach for Binary Classification of Hand Movements of Spinal Cord Injured Patients(Md. Shafiul Islam Joy, Mehdi Hasan Chowdhury, Kamrul Hasan, Sagar Mutsuddi, Q. D. Hossain, 2025, 2025 International Conference on Electrical, Computer and Communication Engineering (ECCE))
- A Spiking Neural Network Approach for Classifying Hand Movement and Relaxation from EEG Signal using Time Domain Features(Mohammad Rubaiyat Tanvir Hossain, Md. Shafiul Islam Joy, Mohammed Hasibul Hasan Chowdhury, 2025, WSEAS TRANSACTIONS ON BIOLOGY AND BIOMEDICINE)
- Motor Imagery EEG Decoding Based on TS-former for Spinal Cord Injury Patients.(Fangzhou Xu, Yitai Lou, Yunqing Deng, Zhixiao Lun, Pengcheng Zhao, Di Yan, Zhe Han, Zhi-Cai Wu, Chao Feng, Lei Chen, Jiancai Leng, 2025, Brain research bulletin)
- Coherence based graph convolution network for motor imagery-induced EEG after spinal cord injury(Han Li, Ming Liu, Xin Yu, Jian-guo Zhu, Chongfeng Wang, Xinyi Chen, Chao Feng, Jiancai Leng, Yang Zhang, Fangzhou Xu, 2023, Frontiers in Neuroscience)
- Recognition of EEG-based movement intention combined with channel selection adopting deep learning methods(Jixiang Li, Zhaoxuan Wang, Yurong Li, 2024, Journal of Instrumentation)
- Physiology-Inspired EEG Transformer for Predicting Movement Transitions in Bimanual Tasks.(Tianyu Jia, Haiyang Long, Ciarán McGeady, Xingchen Yang, Francesca Colacrai, Jiarong Wang, Linhong Ji, Chong Li, D. Farina, 2025, IEEE journal of biomedical and health informatics)
- MSHANet: A Multiscale Hybrid Attention Network for Motor Imagery EEG Decoding.(Yanlong Zhao, Dianguo Cao, Haoyang Yu, Guangjin Liang, Zhicheng Chen, 2026, IEEE transactions on bio-medical engineering)
- Time–frequency–space transformer EEG decoding for spinal cord injury(Fangzhou Xu, Ming Liu, Xinyi Chen, Yihao Yan, Jinzhao Zhao, Yanbing Liu, Jiaqi Zhao, Shaopeng Pang, Sen Yin, Jiancai Leng, Yang Zhang, 2024, Cognitive Neurodynamics)
- EEG decoding method based on multi-feature information fusion for spinal cord injury(Fangzhou Xu, Jincheng Li, Gege Dong, Jianfei Li, Xinyi Chen, Jian-guo Zhu, Jinglu Hu, Yang Zhang, Shouwei Yue, Dong Wen, Jiancai Leng, 2022, Neural networks : the official journal of the International Neural Network Society)
- Generalizable Movement Intention Recognition with Multiple Heterogeneous EEG Datasets(Xiao Gu, Jinpei Han, Guang-Zhong Yang, Benny P. L. Lo, 2023, 2023 IEEE International Conference on Robotics and Automation (ICRA))
- Cortical-SSM: A Deep State Space Model for EEG and ECoG Motor Imagery Decoding(Shuntaro Suzuki, Shunya Nagashima, Masayuki Hirata, Komei Sugiura, 2025, ArXiv)
- Improved Automatic Deep Model for Automatic Detection of Movement Intention from EEG Signals(Lida Zare Lahijan, Saeed Meshgini, R. Afrouzian, S. Danishvar, 2025, Biomimetics)
- A multi‐feature fusion graph attention network for decoding motor imagery intention in spinal cord injury patients(Jiancai Leng, Licai Gao, Xiuquan Jiang, Yitai Lou, Yuan Sun, Chen Wang, Jun Li, Heng Zhao, Chao Feng, Fangzhou Xu, Yang Zhang, Tzyy-Ping Jung, 2024, Journal of Neural Engineering)
- Temporal-spatial convolutional residual network for decoding attempted movement related EEG signals of subjects with spinal cord injury(Hamed Mirzabagherian, M. Menhaj, A. Suratgar, Nasibeh Talebi, Mohammad Reza Abbasi Sardari, Atena Sajedin, 2023, Computers in biology and medicine)
- Event-Driven Edge Deep Learning Decoder for Real-Time Gesture Classification and Neuro-Inspired Rehabilitation Device Control(Mustapha Deji Dere, Ji-Hun Jo, Boreom Lee, 2023, IEEE Transactions on Instrumentation and Measurement)
- Realtime-Capable Hybrid Spiking Neural Networks for Neural Decoding of Cortical Activity(Jann Krausse, Alexandru Vasilache, Klaus Knobloch, Juergen Becker, 2025, 2025 Neuro Inspired Computational Elements (NICE))
Pre-Movement Intention Recognition and Multimodal Signal-Fusion Feature Engineering
Focuses on extracting neural features (e.g., MRCPs, spectral oscillations) before movement actually occurs (pre-movement), and on combining multimodal signals such as EEG, EMG, and eye movements to make intention sensing more robust. Involves advanced mathematical transforms (EMD, VMD) and time-varying autoregressive models to achieve early prediction.
- Subject-independent trajectory prediction using pre-movement EEG during grasp and lift task(Anant Jain, Lalan Kumar, 2022, Biomed. Signal Process. Control.)
- Prediction of gait intention from pre-movement EEG signals: a feasibility study(S. M. Shafiul Hasan, Masudur R. Siddiquee, Roozbeh Atri, Rodrigo Ramon, J. Marquez, Ou Bai, 2020, Journal of NeuroEngineering and Rehabilitation)
- Exploring EEG spectral and temporal dynamics underlying a hand grasp movement(Sandeep Bodda, Shyam Diwakar, 2022, PLoS ONE)
- Continuous detection of the self-initiated walking pre-movement state from EEG correlates without session-to-session recalibration(A. Sburlea, L. Montesano, J. Minguez, 2015, Journal of Neural Engineering)
- Decoding of movement-related cortical potentials at different speeds(Jing Zhang, Cheng Shen, Weihai Chen, Xinzhi Ma, Zilin Liang, Yue Zhang, 2024, Cognitive Neurodynamics)
- EMD and VMD in Pre-Movement EEG Signal Analysis: A Hybrid Mode Selection to Classify Upper Limb Complex Movements Using Statistical Features(Beenish Khalid, Ali Hassan, E. Munir, I. Niazi, 2023, 2023 IEEE 20th International Conference on Smart Communities: Improving Quality of Life using AI, Robotics and IoT (HONET))
- Towards decoding motor imagery from EEG signal using optimized back propagation neural network with honey badger algorithm(Zainab Hadi-Saleh, Mohammad Mosleh, Mohamed Adel Al-Shahe, M. Mosleh, 2025, Scientific Reports)
- Metric Learning in Freewill EEG Pre-Movement and Movement Intention Classification for Brain Machine Interfaces(W. Plucknett, L. G. Sanchez Giraldo, Jihye Bae, 2022, Frontiers in Human Neuroscience)
- A common spatial pattern based corticomuscular coherence feature extraction method for movement intention decoding(Yupeng Wang, Minglun Li, Wang Kun, Minpeng Xu, Mingjie Dong, 2025, Journal of Physics: Conference Series)
- Fusion of EEG and EMG signals for detecting pre-movement intention of sitting and standing in healthy individuals and patients with spinal cord injury(Chenyang Li, Yuchen Xu, Tao Feng, Minmin Wang, Xiaomei Zhang, Li Zhang, Ruidong Cheng, Weihai Chen, Weidong Chen, Shaomin Zhang, 2025, Frontiers in Neuroscience)
- Multimodal data-based human motion intention prediction using adaptive hybrid deep learning network for movement challenged person(M. H. Abidi, 2024, Scientific Reports)
- Neural Correlation of EEG and Eye Movement in Natural Grasping Intention Estimation(Chengyu Lin, Chengjie Zhang, Jialu Xu, Renjie Liu, Yuquan Leng, Chenglong Fu, 2023, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Hand Movement Prediction Based on EEG signals by Combining MEMD and CSP(Yi Tao, Nong Yan, Gang Wang, 2020, Proceedings of the 2020 2nd International Conference on Image Processing and Machine Vision)
- Detection of pre movement event — Related desynchronization from single trial EEG signal(Karthik Soman, P. Reddy, H. Lakany, 2013, 2013 IEEE CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGIES)
- Muscles movement intention detection from EEG using movement related cortical potentials (MRCPs)(Muddassar Hussain, Kamran A. Bhatti, T. Zaidi, 2017, 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT))
Fine Motor Control and Continuous Trajectory Prediction (Upper Limb and Hand)
Focuses on decoding complex upper-limb and hand movement intentions, including multi-degree-of-freedom grasp recognition, continuous trajectory tracking in 3D space, and bimanual coordination decoding, with the goal of providing control signals for high-precision neural prostheses.
- Hand kinematics, high-density sEMG comprising forearm and far-field potentials for motion intent recognition(Weichao Guo, Zeming Zhao, Zeyu Zhou, Yun Fang, Yang Yu, Xinjun Sheng, 2025, Scientific Data)
- State-Based Decoding of Continuous Hand Movements Using EEG Signals(Seyyed Moosa Hosseini, V. Shalchyan, 2023, IEEE Access)
- Continuous decoding of movement intention of upper limb self-initiated analytic movements from pre-movement EEG correlates(E. López-Larraz, L. Montesano, Á. Gil-Agudo, J. Minguez, 2014, Journal of NeuroEngineering and Rehabilitation)
- Deep Learning Based Recognition of Hand Movement Intention EEG in Patients with Spinal Cord Injury(Yongyu Jiang, Xiaodong Zhang, Chaoyang Chen, Zhufeng Lu, Yachun Wang, 2020, 2020 10th Institute of Electrical and Electronics Engineers International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER))
- Decoding grasp and speech signals from the cortical grasp circuit in a tetraplegic human(Sarah K. Wandelt, S. Kellis, D. Bjånes, K. Pejsa, Brian Lee, Charles Liu, Richard A. Andersen, 2021, bioRxiv)
- Towards non-invasive EEG-based arm/hand-control in users with spinal cord injury(G. Müller-Putz, P. Ofner, A. Schwarz, J. Pereira, A. Pinegger, C. Dias, Lea Hehenberger, Reinmar J. Kobler, A. Sburlea, 2017, 2017 5th International Winter Conference on Brain-Computer Interface (BCI))
- Continuous 2D trajectory decoding from attempted movement: across-session performance in able-bodied and feasibility in a spinal cord injured participant(Hannah S. Pulferer, Brynja Ásgeirsdóttir, V. Mondini, A. Sburlea, G. Müller-Putz, 2022, Journal of Neural Engineering)
- A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys(Peter J. Ifft, S. Shokur, Zheng Li, M. Lebedev, M. Nicolelis, 2013, Science Translational Medicine)
- Decoding reach and attempted grasp actions from EEG of persons with Spinal Cord Injury(Miriam Kirchhoff, S. M. A. A. Evers, Marvin Wolf, R. Rupp, A. Schwarz, 2022, 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC))
- Upper Limb Movement Execution Classification using Electroencephalography for Brain Computer Interface(S. Khan, Muhammad Majid, S. Anwar, 2023, 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC))
- Classification Of Complicated Upper Limb Movements From Pre-movement EEG Signals Using STFT And Spectral Characteristics(Ehsan Azam, A. Hassan, Muhammad Fadhirul Izwan bin Abdul Malik, I. Niazi, 2023, 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS))
- ECoG-Based Movement Classification and Limbs 3D Translation Prediction : a Deep Learning Study(Quentin Ferdinand, Rémi Souriau, Lucas Struber, H. Lorach, Philippe Ciuciu, Marina Reyboz, T. Aksenova, 2025, 2025 International Joint Conference on Neural Networks (IJCNN))
- Targeting Optimal Grasp-Related Cortical Areas for Intracortical Brain-Machine Interfaces after Spinal Cord Injury(T. Johnson, C. Foli, E. C. Conlan, K. Koenig, M. Lowe, W. Memberg, R. Kirsch, E. Herring, S. Bazarek, E. Graczyk, D. Taylor, A. Ajiboye, J. Sweet, 2025, medRxiv : the preprint server for health sciences)
Closed-Loop Neuromodulation, VR Augmentation, and Gait Rehabilitation Applications
Explores BCI applications in clinical settings such as spinal cord injury and stroke. By combining virtual reality (VR) feedback, robot-assisted training, and real-time-triggered electrical stimulation (FES/SCS), closed-loop systems are built to promote cortical remodeling and functional recovery.
- Effect of Robot-Assisted Training on EEG-Derived Movement-Related Cortical Potentials for Post-Stroke Rehabilitation–A Case Series Study(Maryam Butt, G. Naghdy, F. Naghdy, Geoffrey Murray, H. Du, 2021, IEEE Access)
- Reactivating the Dormant Motor Cortex After Spinal Cord Injury With EEG Neurofeedback: A Case Study With a Chronic, Complete C4 Patient(E. López-Larraz, C. Escolano, L. Montesano, J. Minguez, 2018, Clinical EEG and Neuroscience)
- Gait Training-Based Motor Imagery and EEG Neurofeedback in Lokomat: A Clinical Intervention With Complete Spinal Cord Injury Individuals(E. R. S. Serafini, C. D. Guerrero-Méndez, T. Bastos-Filho, Anibal Cotrina-Atencio, A. F. O. de Azevedo Dantas, D. Delisle-Rodríguez, C. C. do Espírito-Santo, 2024, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Cortical modulation through robotic gait training with motor imagery brain-computer interface enhances bladder function in individuals with spinal cord injury(E. R. S. Serafini, C. D. Guerrero-Méndez, C. F. Blanco-Díaz, Fernando da Silva Fiorin, Thayse S de Albuquerque, André F O A Dantas, D. Delisle-Rodríguez, C. C. do Espírito-Santo, 2025, Scientific Reports)
- A Virtual Induction Approach for EEG Signal of Patient Movement Intention with Lower Limb Motion Assisted Robot(Runlin Dong, Xiaodong Zhang, Hanzhe Li, Xiaojun Shi, 2020, 2020 10th Institute of Electrical and Electronics Engineers International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER))
- Virtual reality mediated brain-computer interface training improves sensorimotor neuromodulation in unimpaired and post spinal cord injury individuals(M. M. N. Mannan, D. Palipana, K. Mulholland, E. Jurd, E. C. R. Lloyd, A. R. J. Quinn, C. B. Crossley, M. Rabbi, D. G. Lloyd, Yang D Teng, C. Pizzolato, 2026, Scientific Reports)
- Non-invasive spinal neuromodulation enables stepping in children with complete spinal cord injury.(Kathryn Lucas, Goutam Singh, Luis R Alvarado, Molly King, Nicole Stepp, Parth Parikh, Beatrice Ugiliweneza, Yury Gerasimenko, Andrea L. Behrman, 2025, Brain : a journal of neurology)
- Development of Lower Limb Exoskeleton Rehabilitation Robot Framework Based on Multi-Modal Motion Intent Detection(Yuefei Wang, Yudong Fang, Wujia Huang, Zhen Liu, Chenglong Zhao, Muxin Du, Liucun Zhu, 2025, IEEE Access)
- Development and evaluation of a BCI-neurofeedback system with real-time EEG detection and electrical stimulation assistance during motor attempt for neurorehabilitation of children with cerebral palsy(A. Behboodi, Julia Kline, A. Gravunder, Connor Phillips, Sheridan M. Parker, Diane L. Damiano, 2024, Frontiers in Human Neuroscience)
- Real-Time Brain-Computer Interface Control of Walking Exoskeleton with Bilateral Sensory Feedback(Jeffrey Lim, Po T. Wang, Won Joon Sohn, Derrick Lin, Shravan Thaploo, L. Bashford, D. Bjånes, A. Nguyen, Hui Gong, M. Armacost, S. Shaw, S. Kellis, Brian Lee, Darrin Lee, P. Heydari, Richard A. Andersen, Z. Nenadic, C. Liu, An H. Do, 2025, Brain stimulation)
- Brain-Computer Interface controlled Functional Electrical Stimulation: Evaluation with healthy subjects and spinal cord injury patients(L. G. Hernández-Rojas, J. Cantillo-Negrete, Omar Mendoza-Montoya, R. Carino-Escobar, Ismael Leyva-Martinez, Ana Valeria Aguirre Guemez, Aida Barrera-Ortiz, P. Carrillo-Mora, J. Antelis, 2022, IEEE Access)
- Non-invasive, Brain-controlled Functional Electrical Stimulation for Locomotion Rehabilitation in Individuals with Paraplegia(Aurelie Selfslagh, S. Shokur, Debora Campos, A. Donati, Sabrina Almeida, Seidi Y. Yamauti, D. B. Coelho, M. Bouri, M. Nicolelis, 2019, Scientific Reports)
- Peripheral Electrical Stimulation Triggered by Self-Paced Detection of Motor Intention Enhances Motor Evoked Potentials(I. Niazi, N. Mrachacz‐Kersting, N. Jiang, K. Dremstrup, Dario Farina, 2012, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- A machine-learning approach to volitional control of a closed-loop deep brain stimulation system(Brady C. Houston, Margaret C. Thompson, A. Ko, H. Chizeck, 2018, Journal of Neural Engineering)
- Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia(G. Sharma, D. Friedenberg, Nicholas V. Annetta, B. Glenn, Marcie Bockbrader, Connor Majstorovic, Stephanie Domas, W. Mysiw, A. Rezai, C. Bouton, C. Bouton, 2016, Scientific Reports)
神经生理机制建模与系统工程化优化
研究脑电、肌电及脊髓信号的生成机制、同源性特征及神经可塑性。同时关注BCI系统的实用性优化,包括电极定位、硬件评估、模型重采样校准策略以及用户学习效应。
- EEG generation mechanism of lower limb active movement intention and its virtual reality induction enhancement: a preliminary study(Runlin Dong, Xiaodong Zhang, Hanzhe Li, Gilbert Masengo, Aibin Zhu, Xiaojun Shi, Chen He, 2024, Frontiers in Neuroscience)
- Homology Characteristics of EEG and EMG for Lower Limb Voluntary Movement Intention(Xiaodong Zhang, Hanzhe Li, Zhufeng Lu, Gui Yin, 2021, Frontiers in Neurorobotics)
- Impact of Spinal Manipulation on Cortical Drive to Upper and Lower Limb Muscles(H. Haavik, I. Niazi, M. Jochumsen, D. Sherwin, S. Flavel, K. S. Türker, 2016, Brain Sciences)
- A functional model and simulation of spinal motor pools and intrafascicular recordings of motoneuron activity in peripheral nerve(M. Abdelghani, J. Abbas, K. Horch, R. Jung, 2014, Frontiers in Neuroscience)
- Functional connectivity of EEG motor rhythms after spinal cord injury(Jiancai Leng, Xin Yu, Chongfeng Wang, Jinzhao Zhao, Jianqun Zhu, Xinyi Chen, Zhaoxin Zhu, Xiuquan Jiang, Jiaqi Zhao, Chao Feng, Qingbo Yang, Jianfei Li, Lin Jiang, Fangzhou Xu, Yang Zhang, 2024, Cognitive Neurodynamics)
- EEG Headset Evaluation for Detection of Single-Trial Movement Intention for Brain-Computer Interfaces(M. Jochumsen, H. Knoche, T. Kjaer, B. Dinesen, Preben Kidmose, 2020, Sensors (Basel, Switzerland))
- Comparing Recalibration Strategies for Electroencephalography-Based Decoders of Movement Intention in Neurological Patients with Motor Disability(E. López-Larraz, J. Ibáñez, F. Trincado-Alonso, E. Monge-Pereira, José Luis Pons Rovira, L. Montesano, 2017, International journal of neural systems)
- The Effect of User Learning for Online EEG Decoding of Upper-Limb Movement Intention(Matteo Ceradini, S. Tortora, S. Micera, L. Tonin, 2025, IEEE Transactions on Medical Robotics and Bionics)
- MEMS-based High-Density Ultra-Conformal μECOG Electrode Array for Real-Time Motor Decoding(Erda Zhou, Changjiang Liu, Xiner Wang, Xiaoling Wei, Liuyang Sun, Tiger H. Tao, Zhitao Zhou, 2025, 2025 IEEE 38th International Conference on Micro Electro Mechanical Systems (MEMS))
- Comparison of Classifier Calibration Schemes for Movement Intention Detection in Individuals with Cerebral Palsy for Inducing Plasticity with Brain–Computer Interfaces(M. Jochumsen, Cecilie Sørenbye Sulkjær, Kirstine Schultz Dalgaard, 2025, Sensors (Basel, Switzerland))
- Adaptive Integrating General and Personalized Features for Enhanced Decoding of Motor Imagery EEG Signals via HyperNet-Based Module(Si-Hyun Kim, Sung-Jin Kim, Dae-Hyeok Lee, Heon Kwak, Seong-Whan Lee, 2024, 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC))
- An Adaptive Embedded Platform to Enable Real-Time Brain Motor Decoding(Joe Saad, Adrian Evans, Victor Roux-Sibillon, Ivan Miro-Panades, T. Aksenova, Lorena Anghel, 2025, 2025 21st International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT))
The final grouping traces a complete technical pathway from low-level neural mechanism modeling, through advanced AI decoding algorithms, to limb-specific functional restoration and clinical closed-loop rehabilitation systems. Three core trends emerge: 1) breakthroughs in brain–spine interface (BSI) technology have enabled direct neural drive that bypasses the lesion site; 2) the introduction of deep learning and brain-inspired computing (SNNs) has greatly improved the accuracy and real-time performance of complex intention recognition; and 3) multimodal fusion combined with VR feedback and electrical stimulation is shifting rehabilitation from passive training toward active neural remodeling, substantially improving clinical utility and patients' motor recovery.
A total of 157 related references.
Introduction: Rehabilitation devices assist individuals with movement disorders by supporting daily activities and facilitating effective rehabilitation training. Accurate and early motor-intention detection is vital for real-time device applications. However, traditional detection methods often rely on single-modality signals, such as EEG or EMG alone, which can be limited by low signal quality and reduced stability. This study proposes a multimodal fusion method based on EEG–EMG functional connectivity to detect sitting and standing intentions before movement execution, enabling timely intervention and reducing latency in rehabilitation devices.
Methods: Eight healthy subjects and five spinal cord injury (SCI) patients performed cue-based sit-to-stand and stand-to-sit transition tasks while EEG and EMG were recorded simultaneously. EEG–EMG functional connectivity networks were constructed from data epochs covering the 1.5 s before movement onset. Pairwise spatial filters were then designed to extract discriminative spatial network topologies. Each filter was paired with a support vector machine classifier to assign future movements to one of three classes: sit-to-stand, stand-to-sit, or rest, with the final prediction determined by majority voting.
Results: Among the three functional connectivity measures investigated (coherence, Pearson correlation coefficient, and mutual information (MI)), the MI-based EEG–EMG network showed the highest decoding performance (94.33%), outperforming both EEG alone (73.89%) and EMG alone (89.16%). The robustness of the fusion method was further validated in a fatigue-training experiment with healthy subjects: it achieved 92.87% accuracy in the post-fatigue stage, with no significant difference from the pre-fatigue stage (p > 0.05). Additionally, pre-movement windows achieved accuracy comparable to trans-movement windows (p > 0.05 for both pre- and post-fatigue stages). For the SCI patients, the fusion method improved accuracy to 87.54% over the single-modality methods (EEG: 83.03%; EMG: 84.13%), suggesting that it could be promising for practical rehabilitation applications.
Conclusion: The proposed multimodal fusion method significantly enhances the detection of human motor intentions. By enabling early detection of sitting and standing intentions, it holds the potential to offer more accurate and timely interventions within rehabilitation systems.
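The pipeline above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: channel counts, the 128 Hz rate, the MI estimator, and the use of scikit-learn's built-in one-vs-one SVM voting (in place of the paper's pairwise spatial filters) are all assumptions.

```python
# Hypothetical sketch: mutual-information EEG-EMG connectivity features
# feeding pairwise SVMs with majority voting. Sizes and the MI estimator
# are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVC

def mi_connectivity(eeg, emg):
    """Build an (n_eeg x n_emg) mutual-information network from one
    pre-movement epoch (channels x samples), flattened to a feature vector."""
    net = np.empty((eeg.shape[0], emg.shape[0]))
    for i, x in enumerate(eeg):
        for j, y in enumerate(emg):
            net[i, j] = mutual_info_regression(x.reshape(-1, 1), y)[0]
    return net.ravel()

rng = np.random.default_rng(0)
n_trials, n_eeg, n_emg, n_samp = 30, 4, 2, 192   # 1.5 s at 128 Hz (assumed)
X = np.stack([mi_connectivity(rng.standard_normal((n_eeg, n_samp)),
                              rng.standard_normal((n_emg, n_samp)))
              for _ in range(n_trials)])
y = rng.integers(0, 3, n_trials)  # sit-to-stand / stand-to-sit / rest

# SVC trains one classifier per class pair and predicts by majority vote,
# mirroring the paper's pairwise-filter voting scheme.
clf = SVC(decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:5]))
```

In the real system each class pair would additionally get its own spatial filter over the connectivity network before classification.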
In medicine and rehabilitation, brain-controlled prosthetic hands can help patients with spinal cord injury carry out daily activities. However, recognizing hand-movement intention from the EEG of disabled patients suffers from low accuracy, poor stability, and weak robustness. Unilateral hand-movement control is especially difficult, because the EEG signatures distinguishing different movements all originate from one side of the cerebral cortex and are highly confusable. This paper introduces deep learning for EEG perception and recognition: a convolutional neural network model, combined with Common Spatial Patterns, identifies the intention of unilateral hand movements (palm extension vs. hand grasp). EEG was collected from 15 healthy subjects and 10 patients with spinal cord injury. Among the healthy subjects, the average recognition accuracy was 91.57%, with a best accuracy of 95.83%; among the patients, the average was 78.03%, with a best of 82.41%. In addition, an offline EEG recognition and control system was set up, in which one subject averaged 87.92% accuracy with an average recognition-and-control time of 13.9 ms. With this accuracy and recognition speed, the method has considerable value and application prospects in brain-computer interfaces, robotics, and rehabilitation medicine for the disabled.
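The CSP step named in this abstract can be shown concretely. This is a generic two-class CSP sketch under assumed trial counts and channel layout; the paper's CNN stage is only hinted at via the classic log-variance features a downstream network would consume.

```python
# Minimal two-class Common Spatial Patterns (CSP) sketch, the spatial
# pre-processing the abstract combines with a CNN. All sizes are
# illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples). Returns 2*n_pairs
    spatial filters maximizing variance for one class vs. the other."""
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    w, V = eigh(Ca, Ca + Cb)            # generalized eigenproblem
    idx = np.argsort(w)                 # eigenvalue extremes discriminate best
    keep = np.r_[idx[:n_pairs], idx[-n_pairs:]]
    return V[:, keep].T                 # (2*n_pairs, n_channels)

rng = np.random.default_rng(1)
palm = rng.standard_normal((20, 8, 256))         # "palm extension" trials
grasp = 2.0 * rng.standard_normal((20, 8, 256))  # "hand grasp" trials
W = csp_filters(palm, grasp)

# Log-variance of the CSP-filtered trial is the classic feature vector
# a CNN (or any classifier) would consume downstream.
feats = np.log(np.var(W @ grasp[0], axis=1))
print(feats.shape)  # (4,)
```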
Brain-computer interfaces (BCIs) can translate brain signals directly into commands for external devices. EEG-based BCIs mostly rely on the classification of discrete mental states, leading to unintuitive control. The ERC-funded project "Feel Your Reach" aimed to establish a novel framework based on continuous decoding of hand/arm movement intention for more natural and intuitive control. Over the years, we investigated various aspects of natural control; however, the individual components had not yet been integrated. Here, we present a first implementation of the framework in a comprehensive online study, combining (i) goal-directed movement intention, (ii) trajectory decoding, and (iii) error processing in a unique closed-loop control paradigm. Testing involved twelve able-bodied volunteers performing attempted movements, and one spinal cord injured (SCI) participant. Movement-related cortical potentials and error potentials similar to those in previous studies were revealed, and the attempted movement trajectories were broadly reconstructed. Source analysis confirmed the involvement of sensorimotor and posterior parietal areas in goal-directed movement intention and trajectory decoding. The increased complexity and duration of the experiment led to lower performance than each individual BCI component achieved alone. Nevertheless, the study contributes to understanding natural motor control, providing insights toward more intuitive strategies for individuals with motor impairments.
No abstract available
To develop an efficient brain-computer interface (BCI) system, electroencephalography (EEG) measures neuronal activity in different brain regions through electrodes. Many EEG-based motor imagery (MI) studies do not make full use of brain network topology. This paper proposes a deep learning framework based on a modified graph convolutional network (M-GCN), in which temporal-frequency processing is applied to the data through a modified S-transform (MST) to improve the decoding of raw EEG across different MI classes. The MST can be matched to the spatial arrangement of the electrodes, and the method fuses multiple features across the temporal-frequency-spatial domain to further improve recognition performance. By characterizing brain function in each specific rhythm, EEG generated by imagined movement can be analyzed effectively to infer the subject's intention. Finally, EEG from patients with spinal cord injury (SCI) is used to build a correlation matrix encoding EEG channel relationships, and the M-GCN decodes these relational features. The proposed M-GCN framework outperforms existing methods, reaching 87.456% accuracy on MI classification; after 10-fold cross-validation, the average accuracy is 87.442%, verifying the reliability and stability of the algorithm. The method can thus support rehabilitation training that helps patients with SCI partially restore motor function.
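The core operation, graph convolution over a channel-correlation adjacency matrix, can be illustrated in isolation. This is a single-layer toy with random weights and an assumed sparsification threshold, not the M-GCN architecture itself.

```python
# Toy sketch of one graph-convolution step over an EEG channel graph,
# illustrating the idea of decoding on a channel correlation matrix.
# Sizes, threshold, and the single-layer design are assumptions.
import numpy as np

rng = np.random.default_rng(2)
eeg = rng.standard_normal((16, 512))   # 16 channels x samples
A = np.abs(np.corrcoef(eeg))           # channel correlation adjacency
A[A < 0.05] = 0                        # drop weak links (threshold assumed)
np.fill_diagonal(A, 1.0)               # self-loops

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in standard GCNs
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

X = rng.standard_normal((16, 8))       # per-channel feature vectors
W = rng.standard_normal((8, 4))        # learnable weights (random here)
H = np.maximum(A_hat @ X @ W, 0)       # one ReLU graph-conv layer
print(H.shape)                         # (16, 4)
```

Stacking such layers, with weights trained by backpropagation, yields the kind of relational decoder the abstract describes.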
BackgroundBrain-machine interfaces (BMI) have recently been integrated within motor rehabilitation therapies by actively involving the central nervous system (CNS) within the exercises. For instance, the online decoding of intention of motion of a limb from pre-movement EEG correlates is being used to convert passive rehabilitation strategies into active ones mediated by robotics. As early stages of upper limb motor rehabilitation usually focus on analytic single-joint mobilizations, this paper investigates the feasibility of building BMI decoders for these specific types of movements.MethodsTwo different experiments were performed within this study. For the first one, six healthy subjects performed seven self-initiated upper-limb analytic movements, involving from proximal to distal articulations. For the second experiment, three spinal cord injury patients performed two of the previously studied movements with their healthy elbow and paralyzed wrist. In both cases EEG neural correlates such as the event-related desynchronization (ERD) and movement related cortical potentials (MRCP) were analyzed, as well as the accuracies of continuous decoders built using the pre-movement features of these correlates (i.e., the intention of motion was decoded before movement onset).ResultsThe studied movements could be decoded in both healthy subjects and patients. For healthy subjects there were significant differences in the EEG correlates and decoding accuracies, dependent on the moving joint. Percentages of correctly anticipated trials ranged from 75% to 40% (with chance level being around 20%), with better performances for proximal than for distal movements. For the movements studied for the SCI patients the accuracies were similar to the ones of the healthy subjects.ConclusionsThis paper shows how it is possible to build continuous decoders to detect movement intention from EEG correlates for seven different upper-limb analytic movements. 
Furthermore, we report differences in accuracies among movements, which might affect the design of the rehabilitation technologies that will integrate this new type of information. The applicability of the decoders was shown in a clinical population, with similar performance between healthy subjects and patients.
Motor rehabilitation based on the association of electroencephalographic (EEG) activity and proprioceptive feedback has been demonstrated as a feasible therapy for patients with paralysis. To promote long-lasting motor recovery, these interventions have to be carried out across several weeks or even months. The success of these therapies partly relies on the performance of the system decoding movement intentions, which normally has to be recalibrated to deal with the nonstationarities of the cortical activity. Minimizing the recalibration times is important to reduce the setup preparation and maximize the effective therapy time. To date, a systematic analysis of the effect of recalibration strategies in EEG-driven interfaces for motor rehabilitation has not yet been performed. Data from patients with stroke (4 patients, 8 sessions) and spinal cord injury (SCI) (4 patients, 5 sessions) undergoing two different paradigms (self-paced and cue-guided, respectively) are used to study the performance of the EEG-based classification of motor intentions. Four calibration schemes are compared, considering different combinations of training datasets from previous sessions and/or the validated session. The results show significant differences in classifier performance in terms of true positive (TP) and false positive (FP) rates. Combining training data from previous sessions with data from the validation session provides the best compromise between the amount of data needed for calibration and the classifier performance. With this scheme, the average true (false) positive rates obtained are 85.3% (17.3%) and 72.9% (30.3%) for the self-paced and the cue-guided protocols, respectively. These results suggest that the use of optimal recalibration schemes for EEG-based classifiers of motor intentions leads to enhanced performance of these technologies, while not requiring long calibration phases prior to starting the intervention.
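The best-performing calibration scheme above — pooling previous-session data with a short block from the current session — can be sketched with a toy experiment. All data below are simulated, and the class-mean classifier is a deliberately minimal stand-in for the study's EEG classifier; a small mean shift mimics between-day nonstationarity:

```python
import numpy as np

def fit_class_means(X, y):
    """Minimal intention classifier: per-class feature means."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(means, X):
    classes = sorted(means)
    d = np.stack([np.linalg.norm(X - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(1)

def session(shift, n=100):
    """Simulated 4-D feature vectors for two intention classes,
    shifted by `shift` to mimic EEG nonstationarity across days."""
    X = np.vstack([rng.normal(0 + shift, 1, (n, 4)),
                   rng.normal(2 + shift, 1, (n, 4))])
    return X, np.array([0] * n + [1] * n)

X_prev, y_prev = session(shift=0.0)            # earlier sessions
X_val, y_val = session(shift=0.6, n=20)        # today's short calibration block
X_test, y_test = session(shift=0.6, n=200)     # rest of today's session

# Pool previous-session data with the current-session block before fitting.
means = fit_class_means(np.vstack([X_prev, X_val]),
                        np.concatenate([y_prev, y_val]))
acc = (predict(means, X_test) == y_test).mean()
print(f"pooled-calibration accuracy: {acc:.2f}")
```

The pooled scheme keeps the calibration block short (20 trials here) while the old data supplies most of the statistical power, which is the trade-off the study quantifies.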
Restoring the ability to reach and grasp can dramatically improve quality of life for people with cervical spinal cord injury (SCI). The main challenge in restoring independent reaching and grasping in patients is to develop assistive technologies with intuitive and non-invasive user interfaces. We believe that this challenge can be met by directly translating movement-related brain activity into control signals. During the last decade, we have conducted research on EEG-based brain-computer interfaces (BCIs) for the decoding of movement parameters, such as trajectories and targets. Although our findings are promising, the control is still unnatural. Therefore, we surmise that natural and intuitive control of neuroprostheses could be achieved by developing a novel control framework that incorporates detection of goal-directed movement intention, movement decoding, grasp-type identification, error potential detection, and delivery of feedback.
Accurate decoding of hand movement intention from motor imagery (MI) EEG signals of patients with spinal cord injury (SCI) is crucial for developing effective brain-computer interfaces (BCIs) and neurorehabilitation tools. However, accurate decoding of hand movement intention remains a great challenge due to the inherent complexity and noise of EEG signals, particularly in SCI patients. In this study, we propose a spiking neural network (SNN) approach with multiband dispersion entropy (DE) features for classifying the MI EEG signals of SCI patients to enhance classification accuracy. An online dataset was used which contains MI EEG signals of 10 SCI patients performing hand supination, pronation, hand open, palmar grasp, and lateral grasp. After preprocessing, the EEG signal was decomposed into the delta, theta, alpha, and beta bands. DE features were computed from each band using data from five selected channels, and feature dimensionality reduction was performed by principal component analysis (PCA). An SNN was then applied for binary classification (supination vs. pronation, supination vs. open, supination vs. palmar grasp, and supination vs. lateral grasp). Our proposed approach achieved an average binary classification accuracy of 89.62%, which demonstrates its efficacy. Besides, the energy-efficient property of SNNs can extend battery life for BCI devices. Therefore, this study may contribute to the development of advanced BCI systems that improve the quality of life of SCI patients.
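The dispersion entropy feature used above can be computed with a few lines of NumPy. This is a sketch in the style of the standard Rostaghi-Azami definition (map samples to classes via the normal CDF, embed, count patterns), with toy embedding parameters, not the paper's exact pipeline:

```python
import numpy as np
from math import erf

def dispersion_entropy(x, m=2, c=3, delay=1):
    """Dispersion entropy: map samples to c classes via the normal CDF,
    count embedded dispersion patterns of length m, and return the
    normalized Shannon entropy of the pattern distribution."""
    x = np.asarray(x, dtype=float)
    # Normal-CDF mapping to (0, 1), then to integer classes 1..c.
    y = 0.5 * (1 + np.vectorize(erf)((x - x.mean()) / (x.std() * np.sqrt(2))))
    z = np.clip(np.ceil(c * y), 1, c).astype(int)
    # Embed and count dispersion patterns.
    n = len(z) - (m - 1) * delay
    patterns = np.stack([z[i * delay:i * delay + n] for i in range(m)], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(c ** m)

rng = np.random.default_rng(2)
noise = rng.standard_normal(2000)          # irregular signal: entropy near 1
tone = np.sin(0.05 * np.arange(2000))      # regular signal: lower entropy
print(dispersion_entropy(noise), dispersion_entropy(tone))
```

In the study this feature is computed per frequency band and per channel before PCA; here a single 1-D signal illustrates the regularity-versus-irregularity behavior the feature captures.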
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
Electroencephalography (EEG) based brain-computer interfaces (BCIs) offer a promising way for individuals with motor impairments to control prosthetic or rehabilitation devices. Accurately decoding movement intention (MI) is crucial for translating subjects' motor execution plans into action. Common challenges in EEG-based BCIs include performance discrepancies, often requiring frequent recalibration of decoding algorithms. The objective of this study was to enhance BCI decoding performance of upper-limb MI identification by exploiting both machine and subject learning while maintaining stable decoding algorithms. Significant performance improvements were observed across most subjects from the first to the last session of the experiment. Some subjects also demonstrated stable performance without requiring any model recalibration between sessions. All subjects achieved high efficacy in online decoding of movement intention, as reflected in the improvement of the F1 score from 0.58 ± 0.26 in the first session to 0.84 ± 0.13 in the final session. We emphasize the critical importance of allowing users sufficient time to improve their performance in BCIs for upper-limb MI decoding. Unlike existing studies, we specifically evaluate the effect of stable decoding strategies in online and longitudinal BCI sessions, which are key to achieving more reliable and effective BCIs.
Automated movement intention detection is crucial for brain–computer interface (BCI) applications. The automatic identification of movement intention can assist patients with movement problems in regaining their mobility. This study introduces a novel approach for the automatic identification of movement intention through finger tapping. This work compiled a database of EEG signals derived from left finger taps, right finger taps, and a resting condition. Following the requisite pre-processing, the captured signals are input into the proposed model, which is constructed based on graph theory and deep convolutional networks. We introduce a novel architecture based on six deep convolutional graph layers, specifically designed to effectively capture and extract essential features from EEG signals. The proposed model demonstrates remarkable performance, achieving an accuracy of 98% in a binary classification task distinguishing between left and right finger tapping. Furthermore, in a more complex three-class scenario, which adds the resting condition, the model attains an accuracy of 92%. These results highlight the effectiveness of the architecture in decoding motor-related brain activity from EEG data. Furthermore, relative to recent studies, the suggested model exhibits significant resilience in noisy situations, making it suitable for online BCI applications.
The Neuro AI therapy system represents a transformative approach in biomedical engineering, merging AI, EEG-based brain-computer interfaces (BCIs), IoT, and muscle stimulation to enhance motor recovery for individuals with paralysis or motor impairments. Traditional rehabilitation, often passive and lacking real-time adaptability, is challenged by this system's interactive framework. By interpreting EEG signals to detect movement intent, even without physical motion, the system triggers targeted muscle contractions via electrodes or vibration motors, fostering neuroplasticity and neuromuscular coordination. Integrated EMG sensors and real-time feedback through an LCD screen enable continuous monitoring of both brain and muscle activity, offering users and clinicians a transparent view of therapeutic progress. AI algorithms further refine recovery by dynamically analyzing physiological data, personalizing stimulation protocols, and adjusting in response to evolving capabilities. This closed-loop, adaptive system not only accelerates motor function restoration but also provides a scalable, data-driven platform that evolves with the user. The Neuro AI therapy system thus offers a next-generation, comprehensive solution for effective, long-term motor rehabilitation.
Introduction Active rehabilitation requires active neurological participation when users use rehabilitation equipment. A brain-computer interface (BCI) is a direct communication channel for detecting changes in the nervous system. Individuals with dyskinesia have unclear intentions to initiate movement due to physical or psychological factors, which is not conducive to detection. Virtual reality (VR) technology can be a potential tool to enhance the movement intention from pre-movement neural signals in clinical exercise therapy. However, its effect on electroencephalogram (EEG) signals is not yet known. Therefore, the objective of this paper is to construct a model of the EEG signal generation mechanism of lower limb active movement intention and then investigate whether VR induction could improve movement intention detection based on EEG. Methods Firstly, a neural dynamic model of lower limb active movement intention generation was established from the perspective of signal transmission and information processing. Secondly, the movement-related EEG signal was calculated based on the model, and the effect of VR induction was simulated. Movement-related cortical potential (MRCP) and event-related desynchronization (ERD) features were extracted to analyze the enhancement of movement intention. Finally, we recorded EEG signals of 12 subjects in normal and VR environments to verify the effectiveness and feasibility of the above model and VR induction enhancement of lower limb active movement intention for individuals with dyskinesia. Results Simulation and experimental results show that VR induction can effectively enhance the EEG features of subjects and improve the detectability of movement intention. Discussion The proposed model can simulate the EEG signal of lower limb active movement intention, and VR induction can enhance the early and accurate detectability of lower limb active movement intention. 
It lays the foundation for further robot control based on the actual needs of users.
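The ERD feature central to the VR-induction study above has a simple quantitative form: the relative band-power change from a resting baseline, with negative values indicating desynchronization. The following NumPy sketch computes it on simulated signals (the "VR-enhanced" epoch is an assumption for illustration, modeled as deeper mu-rhythm suppression):

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of x in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def erd_percent(baseline, epoch, fs, band=(8, 13)):
    """Classic ERD measure: relative band-power change from baseline.
    Negative values indicate event-related desynchronization."""
    p_ref = band_power(baseline, fs, *band)
    return 100.0 * (band_power(epoch, fs, *band) - p_ref) / p_ref

fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(4)
baseline = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)
# Pre-movement epochs: the simulated VR condition suppresses the
# 10 Hz mu rhythm more strongly than the normal condition.
epoch_normal = 0.7 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)
epoch_vr = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)
print(erd_percent(baseline, epoch_normal, fs),
      erd_percent(baseline, epoch_vr, fs))
```

A stronger (more negative) ERD, as the study reports under VR induction, translates directly into a more detectable movement intention.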
No abstract available
Background and Objective Exoskeleton robot control should ideally be based on human voluntary movement intention. The readiness potential (RP) component of the motion-related cortical potential is observed before movement in the electroencephalogram and can be used for intention prediction. However, its single-trial features are weak and highly variable, and existing methods cannot achieve high cross-temporal and cross-subject accuracies in practical online applications. Therefore, this work aimed to combine a deep convolutional neural network (CNN) framework with a transfer learning (TL) strategy to predict the lower limb voluntary movement intention, thereby improving the accuracy while enhancing the model generalization capability; this would also provide sufficient processing time for the response of the exoskeleton robotic system and help realize robot control based on the intention of the human body. Methods The signal characteristics of the RP for lower limb movement were analyzed, and a parameter TL strategy based on CNN was proposed to predict the intention of voluntary lower limb movements. We recruited 10 subjects for offline and online experiments. Multivariate empirical-mode decomposition was used to remove the artifacts, and the moment of onset of voluntary movement was labeled using lower limb electromyography signals during network training. Results The RP features can be observed from multiple data overlays before the onset of voluntary lower limb movements, and these features have long latency periods. The offline experimental results showed that the average movement intention prediction accuracy was 95.23% ± 1.25% for the right leg and 91.21% ± 1.48% for the left leg, which showed good cross-temporal and cross-subject generalization while greatly reducing the training time. Online movement intention prediction can predict results about 483.9 ± 11.9 ms before movement onset with an average accuracy of 82.75%. 
Conclusion The proposed method has a higher prediction accuracy with a lower training time, has good generalization performance for cross-temporal and cross-subject aspects, and is well-prioritized in terms of the temporal responses; these features are expected to lay the foundation for further investigations on exoskeleton robot control.
Brain-computer interface (BCI) is an emerging technology which provides a pathway for controlling communication and external devices. Electroencephalogram (EEG)-based motor imagery (MI) task recognition has important research significance for stroke, disability, and other conditions in the BCI field. However, enhancing the classification performance for decoding MI-related EEG signals presents a significant challenge, primarily due to the variability across different subjects and the presence of irrelevant channels. To address this issue, a novel hybrid structure is developed in this study to classify the MI tasks via a deep separable convolution network (DSCNN) and bidirectional long short-term memory (BLSTM). First, the collected time-series EEG signals are processed into a matrix grid. Subsequently, data segments formed using a sliding-window strategy are input into the proposed DSCNN model for feature extraction (FE) across various dimensions. The extracted spatial-temporal features are then fed into the BLSTM network, which further refines vital time-series features to identify five distinct types of MI-related tasks. Ultimately, the evaluation results demonstrate that the developed model achieves a 98.09% accuracy rate on the EEGMMIDB physiological datasets over a 4-second period for MI tasks using all channels, outperforming other existing studies. The evaluation indexes Recall, Precision, Test-AUC, and F1-score also reach 97.76%, 97.98%, 98.63%, and 97.86%, respectively. Moreover, a Gradient-weighted Class Activation Mapping (Grad-CAM) visualization technique is adopted to select the vital EEG channels and reduce irrelevant information. As a result, a satisfying accuracy of 94.52% was also obtained with 36 channels selected using the Grad-CAM approach.
Our study not only provides an optimal trade-off between recognition rate and number of channels, halving the channel count, but also advances practical application research in the field of BCI rehabilitation medicine.
Human movement intention recognition is important for human-robot interaction. Existing work based on motor imagery electroencephalogram (EEG) provides a non-invasive and portable solution for intention detection. However, the data-driven methods may suffer from the limited scale and diversity of the training datasets, which result in poor generalization performance on new test subjects. It is practically difficult to directly aggregate data from multiple datasets for training, since they often employ different channels and collected data suffers from significant domain shifts caused by different devices, experiment setup, etc. On the other hand, the inter-subject heterogeneity is also substantial due to individual differences in EEG representations. In this work, we developed two networks to learn from both the shared and the complete channels across datasets, handling inter-subject and inter-dataset heterogeneity respectively. Based on both networks, we further developed an online knowledge co-distillation framework to collaboratively learn from both networks, achieving coherent performance boosts. Experimental results have shown that our proposed method can effectively aggregate knowledge from multiple datasets, demonstrating better generalization in the context of cross-subject validation.
EEG-based brain-machine interfaces (BMIs) offer an intuitive approach for individuals with motor impairments to control prosthetic or rehabilitation devices. Decoding movement intentions plays a vital role in accurately translating the motor execution plans of subjects, such as identifying the desired grasp type or target position. In this study, EEG signals were recorded from seven healthy subjects during self-paced reaching and grasping tasks. Power spectral density (PSD) and entropy were extracted as features to assess their efficacy in discriminating between different brain states related to movement planning. Classification between movement anticipation and resting-state periods was evaluated using machine-learning methods (Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Support Vector Machines). The achieved results provide strong evidence for the feasibility of decoding movement intention, laying the foundation for future applications and advancements in the field. Attaining high accuracy in decoding movement intention holds significant potential for the translational applications of BMIs in the fields of biomedical engineering and rehabilitation.
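The PSD and entropy features above are straightforward to compute. The sketch below uses a plain periodogram (a simpler stand-in for the Welch estimator typically used) and simulated epochs in which alpha power drops before movement; the band choices and signals are illustrative assumptions, not the study's data:

```python
import numpy as np

def psd_features(epoch, fs, bands=((8, 13), (13, 30))):
    """Periodogram band powers plus spectral entropy for one EEG epoch."""
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    powers = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    p = psd / psd.sum()
    spec_entropy = -(p * np.log(p + 1e-12)).sum()
    return np.array(powers + [spec_entropy])

fs = 250
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(3)
# Resting epoch: strong 10 Hz alpha; planning epoch: attenuated alpha (ERD).
rest = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
plan = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
f_rest, f_plan = psd_features(rest, fs), psd_features(plan, fs)
print(f_rest[0] > f_plan[0])  # alpha power drops before movement
```

Feature vectors of this form, one per epoch, are what an LDA/QDA/SVM classifier would then separate into anticipation versus rest.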
Brain-computer interfaces (BCI) offer a promising approach to restoring hand functionality for people with cervical spinal cord injury (SCI). A reliable classification of brain activities based on appropriate flexibility in feature extraction could enhance BCI system performance. In the present study, based on convolutional layers with temporal-spatial, separable, and depthwise structures, we develop Temporal-Spatial Convolutional Residual Network (TSCR-Net) and Temporal-Spatial Convolutional Iterative Residual Network (TSCIR-Net) structures to classify electroencephalogram (EEG) signals. Using EEG signals from five different hand movement classes of people with SCI, we compare the effectiveness of the TSCIR-Net and TSCR-Net models with some competitive methods. We use a Bayesian hyperparameter optimization algorithm to tune the hyperparameters of the compact convolutional neural networks. To show the high generalizability of the proposed models, we compare the results of the models in different frequency ranges. Our proposed models decoded distinctive characteristics of different movement efforts and obtained higher classification accuracy than previous deep neural networks. Our findings indicate that the TSCIR-Net and TSCR-Net models achieve better classification accuracies than the methods compared from the literature: 71.11% and 64.55% on the EEG_All data set, and 57.74% and 67.87% on the EEG_Low frequency data set, respectively.
Decoding the user’s natural grasp intent enhances the application of wearable robots, improving the daily lives of individuals with disabilities. Electroencephalogram (EEG) and eye movements are two natural representations when users generate grasp intent in their minds, with current studies decoding human intent by fusing EEG and eye movement signals. However, the neural correlation between these two signals remains unclear. Thus, this paper aims to explore the consistency between EEG and eye movement in natural grasping intention estimation. Specifically, six grasp intent pairs are decoded by combining feature vectors and utilizing the optimal classifier. Extensive experimental results indicate that the coupling between the EEG and eye movements intent patterns remains intact when the user generates a natural grasp intent, and concurrently, the EEG pattern is consistent with the eye movements pattern across the task pairs. Moreover, the findings reveal a solid connection between EEG and eye movements even when taking into account cortical EEG (originating from the visual cortex or motor cortex) and the presence of a suboptimal classifier. Overall, this work uncovers the coupling correlation between EEG and eye movements and provides a reference for intention estimation.
Consumer markets demonstrate an observable trend towards mass customization. Assembly processes are required to adapt in order to meet the requirements of increased product complexity and constant variant updates. One concept for meeting the challenges of this trend is close collaboration between human workers and robots. Currently, in order to protect human operators, barriers and restrictions are in place which prevent close collaboration. This is because safety systems are mostly reactive rather than anticipating motions or intentions. There are probabilistic models which aim to overcome these limitations, yet predicting human behavior remains highly complex. Thus, it would be desirable to physically measure movement intentions in advance. A novel approach is presented for measuring upper-limb movement intentions with a mobile electroencephalogram (EEG). The human brain constantly analyses and evaluates motor movements up to 0.5 s before their execution. A safety system could therefore be enhanced with an early warning of an upcoming movement. In order to classify the EEG signals as fast as possible and to minimize fine-tuning efforts, a novel data processing methodology is introduced. This includes TimeSeriesKMeans labelling of movement intentions, which is then used to train a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). The results suggested that high detection accuracies and potential time gains of up to 513 ms can be achieved in a semi-online system. The time advantages, included in a simulation, demonstrated the potential to increase a system's reaction time and therefore improve the safety and fluency of human-robot collaboration.
Brain–computer interfaces (BCIs) have successfully been used for stroke rehabilitation by pairing movement intentions with, e.g., functional electrical stimulation. It has also been proposed that BCI training is beneficial for people with cerebral palsy (CP). To develop BCI training for CP patients, movement intentions must be detected from single-trial EEG. The study aim was to detect movement intentions in CP patients and able-bodied participants using different classification scenarios to show the technical feasibility of BCI training in CP patients. Five CP patients and fifteen able-bodied participants performed wrist extensions and ankle dorsiflexions while EEG was recorded. All but one participant repeated the experiment on 1–2 additional days. The EEG was divided into movement intention and idle epochs that were classified with a random forest classifier using temporal, spectral, and template matching features to estimate movement intention detection performance. When calibrating the classifier on data from the same day and participant, 75% and 85% classification accuracies were obtained for CP- and able-bodied participants, respectively. The performance dropped by 5–15 percentage points when training the classifier on data from other days and other participants. In conclusion, movement intentions can be detected from single-trial EEG, indicating the technical feasibility of using BCIs for motor training in people with CP.
Decoding movement-related intentions is a key step in implementing BMIs. Decoding EEG has been challenging due to its low spatial resolution and signal-to-noise ratio. Metric learning allows finding a representation of data that captures a desired notion of similarity between data points. In this study, we investigate how metric learning can help find a representation of the data to efficiently classify EEG movement and pre-movement intentions. We evaluate the effectiveness of the obtained representation by comparing the classification performance of a Support Vector Machine (SVM) trained on the original (Euclidean) representation against representations obtained with three different metric learning algorithms: Conditional Entropy Metric Learning (CEML), Neighborhood Component Analysis (NCA), and Entropy Gap Metric Learning (EGML). We examine different types of features input to the metric learning algorithms, such as time and frequency components, and apply both linear and non-linear SVMs to compare the classification accuracies on a publicly available EEG data set for two subjects (Subject B and C). Although the metric learning algorithms do not increase the classification accuracies, their interpretability, through an importance measure we define here, helps in understanding data organization and how much each EEG channel contributes to the classification. In addition, among the metric learning algorithms investigated, EGML shows the most robust performance due to its ability to compensate for differences in scale and correlations among variables. Furthermore, from the observed variations of the importance maps on the scalp and the classification accuracy, selecting an appropriate feature, such as clipping the frequency range, has a significant effect on the outcome of metric learning and subsequent classification.
In our case, reducing the range of the frequency components to 0–5 Hz shows the best interpretability in both Subject B and C, and the best classification accuracy for Subject C. Our experiments support potential benefits of using metric learning algorithms by providing, via the importance measure, a visual explanation of the data projections that accounts for the inter-class separations. This visualizes the contribution of features that can be related to brain function.
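The motivation for metric learning on EEG features — compensating for scale differences and correlations among variables, as noted for EGML above — can be illustrated with a deliberately simplified stand-in: learning a Mahalanobis metric by whitening with the within-class covariance. This is not CEML, NCA, or EGML themselves, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300

def make_class(offset):
    """Two correlated features: a shared high-variance component masks the
    class difference, which lives along (x2 - x1)."""
    z = 5.0 * rng.standard_normal(n)
    return np.column_stack([z + 0.5 * rng.standard_normal(n),
                            z + offset + 0.5 * rng.standard_normal(n)])

X0, X1 = make_class(0.0), make_class(2.0)
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def nearest_centroid_acc(X, y):
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - mu1, axis=1)
            < np.linalg.norm(X - mu0, axis=1)).astype(int)
    return (pred == y).mean()

# "Learned" metric: whiten by the within-class covariance, i.e. a
# Mahalanobis distance (a simplified stand-in for CEML / NCA / EGML).
Xc = np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)])
L = np.linalg.cholesky(np.linalg.inv(Xc.T @ Xc / len(Xc)))
acc_euclid = nearest_centroid_acc(X, y)
acc_metric = nearest_centroid_acc(X @ L, y)
print(acc_euclid, acc_metric)
```

In the Euclidean representation the shared component dominates distances; after whitening, the discriminative direction is rescaled and the same simple classifier separates the classes far better, which is the effect the metric learning algorithms aim for.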
In the field of lower limb exoskeletons, besides electromechanical system design and control, attention has been paid to linking exoskeleton robots to humans via electroencephalography (EEG) and electromyography (EMG). However, even state-of-the-art decoding of lower limb voluntary movement intention still faces many obstacles. In the following work, focusing on the inner mechanism, an analysis of the homology characteristics of EEG and EMG for lower limb voluntary movement intention was conducted. A mathematical model of EEG and EMG was built based on this mechanism, consisting of a neural mass model (NMM), a neuromuscular junction model, an EMG generation model, a decoding model, and a musculoskeletal biomechanical model. The mechanism analysis and simulation results demonstrated that EEG and EMG signals are both excited by the same movement intention, with a difference in response time. To assess the efficiency of the proposed model, a synchronous acquisition system for EEG and EMG was constructed to analyze the homology and response time difference of EEG and EMG signals during limb movement intention. Wavelet coherence was used to analyze the internal correlation between EEG and EMG signals for the same limb movement intention. To further test the hypothesis of this paper, six subjects were involved in the experiments. The experimental results demonstrated a strong EEG-EMG coherence at 1 Hz around movement onset, with the phase of the EEG leading the EMG. Both the simulation and experimental results revealed that EEG and EMG are homologous, and that the EEG response precedes the EMG response during limb movement intention. This work provides a theoretical basis for the feasibility of EEG-based pre-perception and of fused EEG-EMG perception in human movement detection.
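The 1 Hz EEG-EMG coherence finding above can be sketched with SciPy's Welch-based magnitude-squared coherence (a simpler stand-in for the wavelet coherence used in the study). The signals below are simulated: both receive the same ~1 Hz drive, with the "EMG" delayed by a nominal 50 ms to mimic corticospinal conduction:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(6)

drive = np.sin(2 * np.pi * 1.0 * t)          # shared ~1 Hz motor drive
eeg = drive + 0.5 * rng.standard_normal(len(t))
delay = int(0.05 * fs)                        # assumed 50 ms conduction delay
emg = np.roll(drive, delay) + 0.5 * rng.standard_normal(len(t))

# Magnitude-squared coherence; a pure delay shifts phase but does not
# reduce coherence magnitude, so the shared drive shows up as a peak.
f, Cxy = coherence(eeg, emg, fs=fs, nperseg=2048)
peak = f[np.argmax(Cxy)]
print(f"peak coherence at {peak:.2f} Hz")
```

Recovering the lead-lag relationship itself (EEG preceding EMG) would additionally require the cross-spectral phase, which wavelet coherence provides time-resolved.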
The combined analysis of electroencephalogram (EEG) and electromyogram (EMG) is often used for the recognition of motor intentions in brain-computer interface research. Corticomuscular coherence (CMC) is one of the most promising features in the combined analysis of EEG and EMG. However, traditional CMC analysis usually only focuses on the correlation between a single EEG channel and the sEMG, ignoring the spatial information of neuromuscular connectivity. In this study, we proposed a novel feature extraction method based on the common spatial pattern (CSP) for extracting the coherence between multi-channel EEG and sEMG, and applied this method to decode a five-class motor execution task of the upper limbs. As a result, the average accuracy of the proposed CSP-CMC method reached 98.7%, significantly higher than that of other combinations of feature extraction methods. The results demonstrate that the CSP-CMC method exhibits excellent performance in the recognition of motor execution tasks and is expected to promote the development of applications in neural rehabilitation.
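The CSP step underlying the CSP-CMC method is a generalized eigendecomposition of class covariance matrices. The sketch below is a textbook two-class CSP on simulated multichannel trials (toy channel counts and variance structure, not the paper's five-class pipeline, which extends this idea):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns: solve Ca w = lambda (Ca + Cb) w and keep
    the filters with the most extreme eigenvalues, i.e. the largest
    between-class variance ratios."""
    def mean_cov(trials):
        return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = eigh(Ca, Ca + Cb)          # ascending eigenvalues
    idx = np.r_[np.arange(n_pairs), np.arange(len(evals) - n_pairs, len(evals))]
    return evecs[:, idx].T                    # (2 * n_pairs, channels)

# Toy data: 3 "channels"; class A has high variance on channel 0,
# class B on channel 2.
rng = np.random.default_rng(7)
make = lambda scales: [np.diag(scales) @ rng.standard_normal((3, 500))
                       for _ in range(30)]
W = csp_filters(make([3.0, 1.0, 1.0]), make([1.0, 1.0, 3.0]))

def logvar(W, X):
    """Standard CSP feature: log-variance of the spatially filtered trial."""
    return np.log(np.var(W @ X, axis=1))

fa = logvar(W, np.diag([3.0, 1.0, 1.0]) @ rng.standard_normal((3, 500)))
fb = logvar(W, np.diag([1.0, 1.0, 3.0]) @ rng.standard_normal((3, 500)))
print(fa, fb)
```

The first filter maximizes variance for class B and the last for class A, so the log-variance features of unseen trials separate the classes; in CSP-CMC the same spatial filtering is applied before computing coherence with the sEMG.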
Patients with motor impairments need caregivers' help to initiate the operation of brain-computer interfaces (BCI). This study aims to identify and characterize movement intention using multichannel electroencephalography (EEG) signals as a means to initiate BCI systems without extra accessories or methodologies. We propose to discriminate the resting and motor imagery (MI) states with high accuracy using the Fourier-based synchrosqueezing transform (FSST) as a feature extractor. FSST has been investigated and compared with other popular approaches in 28 healthy subjects for a total of 6657 trials. Accuracy and f-measure values of 99.8% and 0.99, respectively, were obtained when FSST was used as the feature extractor, singular value decomposition (SVD) as the feature selection method, and support vector machines as the classifier. Moreover, this study investigated the use of data containing a certain amount of noise, without any preprocessing, in addition to the clean counterparts. Furthermore, the statistical analysis of the EEG channels with the best discrimination (of resting and MI states) characteristics demonstrated that the F4-Fz-C3-Cz-C4-Pz channels and several statistical features had statistical significance levels p < 0.05. This study showed that movement preparation can be detected in real time employing the FSST-SVD combination and several channels with minimal pre-processing effort.
Traditional lower limb exoskeleton robots use electromechanical control panels or buttons to assist patients with physical disabilities, which is a passive way of rehabilitation training. Over the past few years, extensive research has been conducted on brain-controlled lower limb exoskeleton robot technology combined with electroencephalogram (EEG) signals. However, the way most paradigms are designed does not conform to the natural walking posture of human beings. In this study, a new EEG-based paradigm is proposed for detecting the intention of compound-limb movement, which is closer to the human walking posture. The time-frequency analysis shows stronger event-related desynchronization (ERD) at the main channels. Besides, the brain topographical distribution shows that the ERD not only exists in the contralateral sensorimotor area but also appears over the central parietal lobe region (the leg motion mapping region), which initially verified the possibility of differentiating this pattern. Then, after extracting time-frequency-spatial features with the common spatial pattern method, three supervised machine learning algorithms are used to classify the compound-limb movement. The results demonstrate that the classification performance of the compound-limb movement mode is much higher than that of single-leg movement (>20%). This research introduces a new paradigm for classifying lower-limb movement intention, which might help control lower limb exoskeletons through subjects' voluntary intention and improve the effectiveness of the human-machine interface system.
Objectives. Patients with Parkinson's disease often suffer from motor impairments such as tremor and freezing of movement that can be difficult to treat. To unfreeze movement, it has been suggested to provide sensory stimuli. To avoid constant stimulation, episodes of freezing of movement need to be detected, which is a challenge. This can potentially be achieved using a brain–computer interface (BCI) based on movement-related cortical potentials (MRCPs), which are observed in association with the intention to move. The objective of this study was to detect MRCPs from single-trial EEG. Approach. Nine patients with Parkinson's disease executed 100 wrist movements and 100 ankle movements while continuous EEG and EMG were recorded. The experiment was repeated in two sessions on separate days. Using temporal, spectral and template-matching features, random forest (RF), linear discriminant analysis, and k-nearest neighbours (kNN) classifiers were constructed in offline analysis to discriminate between epochs containing movement-related or idle brain activity, providing an estimate of BCI performance. Three classification scenarios were tested: (1) within-session (using training and testing data from the same session and participant), (2) between-session (using data from the same participant, with session one for training and session two for testing), and (3) across-participant (training on data from all participants except one and testing on the remaining participant). Main results. The within-session scenario yielded the highest classification accuracies, in the range of 88%–89%, with similar performance across sessions. Performance dropped to 69%–75% and 70%–75% for the between-session and across-participant scenarios, respectively. The highest classification accuracies were obtained with the RF and kNN classifiers. Significance.
The results indicate that it is possible to detect movement intentions in individuals with Parkinson’s disease such that they can operate a BCI which may control the delivery of sensory stimuli to unfreeze movement.
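The template-matching feature family used above can be sketched on synthetic MRCP epochs: average the training movement epochs into a template, score each new epoch by its correlation with that template, and threshold the score. Everything below (epoch length, amplitudes, thresholds) is synthetic and illustrative, not the paper's data or classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 64, 128                                   # toy 2 s epochs at 64 Hz
t = np.arange(n) / fs
mrcp = -2.0 * np.exp(-((t - 1.5) ** 2) / 0.05)    # slow negativity near movement onset

def make_epochs(n_epochs, with_mrcp):
    noise = rng.normal(0, 1, (n_epochs, n))
    return noise + (mrcp if with_mrcp else 0.0)

train_move, train_idle = make_epochs(60, True), make_epochs(60, False)
template = train_move.mean(axis=0)                # average MRCP template

def score(epoch):
    # template-matching feature: correlation with the movement template
    return np.corrcoef(epoch, template)[0, 1]

thr = 0.5 * (np.mean([score(e) for e in train_move]) +
             np.mean([score(e) for e in train_idle]))
test_move, test_idle = make_epochs(40, True), make_epochs(40, False)
acc = (np.mean([score(e) > thr for e in test_move]) +
       np.mean([score(e) <= thr for e in test_idle])) / 2
```

On this synthetic data the single feature already separates movement from idle epochs well; the paper combines it with temporal and spectral features inside RF/LDA/kNN classifiers.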
Many previous studies on brain-machine interfaces (BMIs) have focused on electroencephalography (EEG) signals elicited during motor-command execution to generate device commands. However, exploiting pre-execution brain activity related to movement intention could improve the practical applicability of BMIs. In this study we therefore investigated whether EEG signals occurring before movement execution could be used to classify movement intention. Six subjects performed reaching tasks that required them to move a cursor to one of four targets distributed horizontally and vertically from the center. Using independent components of EEG acquired during a premovement phase, two-class classifications were performed for left vs. right trials and top vs. bottom trials using a support vector machine. Instructions were presented visually (test condition) and aurally (control condition). In the test condition, accuracy for a single window was about 75%, increasing to 85% when two windows were used. In the control condition, accuracy for a single window was about 73%, increasing to 80% with two windows. The classification results showed that combining two windows from different time intervals during the premovement phase improved classification performance in both conditions compared to single-window classification. By categorizing the independent components according to spatial pattern, we found that modality-dependent information can improve classification performance. We confirmed that EEG signals occurring during movement preparation can be used to control a BMI.
Brain–computer interfaces (BCIs) can be used in neurorehabilitation; however, the literature about transferring the technology to rehabilitation clinics is limited. A key component of a BCI is the headset, for which several options are available. The aim of this study was to test four commercially available headsets’ ability to record and classify movement intentions (movement-related cortical potentials—MRCPs). Twelve healthy participants performed 100 movements, while continuous EEG was recorded from the headsets on two different days to establish the reliability of the measures: classification accuracies of single-trials, number of rejected epochs, and signal-to-noise ratio. MRCPs could be recorded with the headsets covering the motor cortex, and they obtained the best classification accuracies (73%−77%). The reliability was moderate to good for the best headset (a gel-based headset covering the motor cortex). The results demonstrate that, among the evaluated headsets, reliable recordings of MRCPs require channels located close to the motor cortex and potentially a gel-based headset.
Background: Prediction of gait intention from pre-movement electroencephalography (EEG) signals is a vital step in developing a real-time brain-computer interface (BCI) for a proper neuro-rehabilitation system. In that respect, this paper investigates the feasibility of a fully predictive methodology to detect the intention to start and stop a gait cycle by utilizing EEG signals obtained before the event occurs. Methods: An eight-channel, custom-made EEG system with electrodes placed around the sensorimotor cortex was used to acquire EEG data from six healthy subjects and two amputees. A discrete wavelet transform-based method was employed to capture event-related information in the alpha and beta bands in the time-frequency domain. The Hjorth parameters, namely activity, mobility, and complexity, were extracted as features, while a two-sample unpaired Wilcoxon test was used to discard redundant features for better classification accuracy. The resulting feature set was used to classify 'walk vs. stop' and 'rest vs. start' classes using a support vector machine (SVM) classifier with an RBF kernel in a ten-fold cross-validation scheme. Results: Using a fully predictive intention detection system, 76.41±4.47% accuracy, 72.85±7.48% sensitivity, and 79.93±5.50% specificity were achieved for 'rest vs. start' classification, while for 'walk vs. stop' classification the mean accuracy, sensitivity, and specificity were 74.12±4.12%, 70.24±6.45%, and 77.78±7.01%, respectively. The overall average true positive rate achieved by this methodology was 72.06±8.27% with 1.45 false positives/min. Conclusion: Extensive simulations and the resulting classification results show that it is possible to achieve statistically similar intention detection accuracy using either only pre-movement EEG features or trans-movement EEG features.
The classifier performance shows the potential of the proposed methodology to predict human movement intention exclusively from pre-movement EEG signals for application in real-life prosthetic and neuro-rehabilitation systems.
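The Hjorth parameters used as features above have a compact closed form: activity is the signal variance, mobility is the ratio of the derivative's standard deviation to the signal's, and complexity is the mobility of the derivative divided by the mobility of the signal. A minimal numpy sketch (toy signal, not the paper's data):

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)                                # first derivative (discrete)
    ddx = np.diff(dx)                              # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Sanity check on a pure sinusoid: mobility approximates its angular
# frequency per sample, and complexity is ~1 (a sine is maximally
# 'simple' by this measure).
n = np.arange(4096)
act, mob, comp = hjorth(np.sin(0.1 * n))
```

These three numbers per channel and band give the small, physiologically interpretable feature vectors that the Wilcoxon test then prunes.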
Patients with lower-limb motor dysfunction often cannot generate strong movement intention during rehabilitation exercise because of pain or other reasons, which makes it difficult to precisely control a lower-limb motion-assist robot according to the patient's movement intention. This paper analyzes the mechanism by which virtual induction can promote the generation of motion intention, establishes a virtual-induction mechanism model, and proposes a method for inducing subjects to generate movement intention based on virtual reality (VR) technology and a lower-extremity motion-assist exoskeleton robot system. The method builds immersive virtual scenes that engage the subjects' interest in order to induce and enhance their active movement intention, producing electroencephalogram (EEG) signals with distinct characteristics. An experimental platform was built for verification, and EEG signals were collected from the subjects as they moved. The experimental results show that the method effectively enhances the subjects' active movement intention: it produced a stronger event-related desynchronization (ERD) phenomenon than the condition without virtual induction, and the EEG signals generated after virtual induction had distinct characteristics. This lays a foundation for movement-intention perception by lower-limb motion-assist robots and for robot-assisted control according to subjects' needs.
Human-machine interfaces (HMIs) have been widely integrated with motor rehabilitation and augmentation systems. Forecasting movement transitions during human-robot interaction is crucial to ensure system safety, intuitiveness, and reactivity, particularly in anticipating human motor intentions under sudden perturbations or emergency scenarios. In this study, we investigated pre-movement neural signatures preceding sudden movement transitions during ongoing bimanual tasks. Informed by these findings, we propose a physiology-informed EEG Transformer (PI-EEGformer) for EEG-based motor intention recognition. An EEG dataset collected from a bimanual movement task, where one hand was required to switch motor states in response to unexpected cues, was used to evaluate the performance of the PI-EEGformer in comparison with seven state-of-the-art models. Results showed that, prior to the movement transition, EEG power spectrum decreased, and movement-related cortical potentials (MRCPs) could be accurately extracted from the contralateral motor cortex. PI-EEGformer reached an average accuracy of 0.912 in inter-subject tests and 0.829 in cross-subject tests in detecting movement transitions using EEG from 500 ms to 100 ms prior to the actual movement. This performance was superior to all the state-of-the-art models tested. These results demonstrate that EEG neural signatures can predict sudden movement transitions during ongoing bimanual tasks. The PI-EEGformer, designed with these physiological signatures, can enable accurate prediction of sudden movement transitions. This study will help improve the response of HMI systems to sudden disturbances, contributing to a more realistic HMI system.
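The reason MRCPs can be "accurately extracted" from pre-movement EEG, as reported above, is that averaging epochs time-locked to movement onset shrinks uncorrelated noise by roughly 1/sqrt(N) while the event-locked potential survives. A synthetic demonstration (amplitudes and epoch counts are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n = 200, 256
signal = np.zeros(n)
signal[128:] = -1.0                                # step-like negativity before movement
epochs = signal + rng.normal(0, 5, (n_epochs, n))  # single trials: SNR far below 1

avg = epochs.mean(axis=0)                          # noise std shrinks ~ 1/sqrt(N)
noise_single = epochs[0, :128].std()               # baseline noise, single trial
noise_avg = avg[:128].std()                        # baseline noise, grand average
```

The averaged trace recovers the negativity that is invisible in any single trial; single-trial detectors like the PI-EEGformer have to beat this averaging baseline without the luxury of 200 repetitions.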
We propose a neuromorphic framework to process the activity of human spinal motor neurons for movement intention recognition. This framework is integrated into a non-invasive interface that decodes the activity of motor neurons innervating intrinsic and extrinsic hand muscles. One of the main limitations of current neural interfaces is that machine learning models cannot exploit the efficiency of the spike encoding performed by the nervous system. Spiking-based pattern recognition would detect the spatio-temporal sparse activity of a neuronal pool and lead to adaptive and compact implementations, eventually running locally in embedded systems. Emergent spiking neural networks (SNNs) have not yet been used for processing the activity of in-vivo human neurons. Here we developed a convolutional SNN to process a total of 467 spinal motor neurons whose activity was identified in 5 participants executing 10 hand movements. The classification accuracy approached 0.95 ± 0.14 for both isometric and non-isometric contractions. These results show for the first time the potential of highly accurate motion-intent detection by combining non-invasive neural interfaces and SNNs.
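A core ingredient of such neuromorphic front ends is converting continuous signals into sparse event streams. One standard scheme (not necessarily the one used in this paper, whose inputs are already motor-neuron spike trains) is delta modulation: emit an UP or DOWN event whenever the signal drifts more than a fixed threshold from the last encoded level. A self-contained sketch:

```python
import numpy as np

def delta_encode(x, threshold):
    """Delta-modulation spike encoding: UP/DOWN event indices for a 1-D signal."""
    level, up, down = x[0], [], []
    for i, v in enumerate(x[1:], start=1):
        while v - level >= threshold:              # signal rose by >= one step
            up.append(i); level += threshold
        while level - v >= threshold:              # signal fell by >= one step
            down.append(i); level -= threshold
    return np.array(up), np.array(down)

def delta_decode(up, down, x0, threshold, n):
    """Reconstruct the encoded level trace from the event streams."""
    events = np.zeros(n)
    np.add.at(events, up, threshold)
    np.add.at(events, down, -threshold)
    return x0 + np.cumsum(events)

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t)                      # toy 3 Hz 'neural drive' signal
up, down = delta_encode(x, 0.05)
xr = delta_decode(up, down, x[0], 0.05, len(x))
```

By construction the reconstruction never deviates from the input by more than one threshold step, which is why a sparse event stream can stand in for the dense signal.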
Surface electromyography (sEMG) signals reflect spinal motor neuron activities and can be used as intuitive inputs for human-machine interaction (HMI) via movement-intent recognition. The motor neuron potentials of the far field (wrist) and near field (forearm), decomposed from high-density (HD) sEMG, prospectively provide robust neural drives for HMI, which remains a challenging research topic. However, there are no publicly available databases that include HD sEMG signals of forearm-wrist (FW) muscles together with hand kinematics (KIN). This paper presents the HD-FW KIN dataset, which comprises HD 448-channel sEMG arrays distributed over the forearm and wrist with simultaneous recording of finger joint angles and finger flexion forces. The dataset contains muscle activities of 21 subjects performing 20 hand gestures and 9 individual or combined finger flexions under two force levels. The usability of the HD sEMG for hand gesture recognition and finger angle and force prediction was validated. The proposed database allows comprehensive extraction of the neural drive from the forearm and wrist, providing neural interfaces for the development of advanced prosthetic hands and wrist-worn consumer electronics.
The number of patients with knee injuries caused by strokes, spinal cord injuries, cerebral palsy or other related diseases is increasing worldwide. Robotic devices such as knee exoskeletons have been studied and adopted in gait rehabilitation, as they can provide effective gait training for patients. In rehabilitation training for stroke patients, the rehabilitation outcome is positively affected by how much physical activity the patients take part in. Most of the signals used to measure patient participation are EMG (electromyography) signals or oxygen consumption, which increase the cost and complexity of the robotic device. To achieve an exoskeleton that provides intelligent, effective, and comfortable assistance to the wearer, it is essential to acquire different types of motion data from the human-exoskeleton system during movement. The measured motion data can be used to identify the wearer's movement intentions, analyze movement states and gait patterns, and evaluate motor performance. This paper proposes a framework for human intention recognition based on motor imagery with multi-mode integration. The framework was applied to the active training mode of the LLER (lower limb exoskeleton rehabilitation robot) and consists of two main parts: an EEG (electroencephalography) intent-signal acquisition framework based on motor imagery (MI) and an EMG-based motion-command correction framework. The MI-based EEG acquisition framework relies on passive training in the pre-rehabilitation period to generate effective EEG signals that drive the LLER robot to execute pre-programmed trajectory training. Moreover, combined with the constant stimulation of the patient's brain by the visual instruments of HMI rehabilitation, the accuracy of motor imagery is reinforced.
The EMG-based motor-command correction framework uses EMG dry-electrode sensors attached to muscle areas of the affected limb where activation is possible. By detecting muscle activation with the EMG sensors, the framework corrects the intentional control commands after EEG acquisition and processing. A control command for the LLER robot is valid only when both the EEG drive command and the EMG muscle-activation command are satisfied; otherwise, it is considered invalid. Based on a rehabilitation-robot dynamics model, a robust adaptive PD control system is developed, and the accurate signals of this multimodal-fusion human intention recognition framework based on motor imagery are used as input signals to the system.
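The EEG-and-EMG validity rule described above is a simple conjunction gate. A tiny sketch of that gating logic (the threshold value and class names are illustrative, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class GatedCommand:
    """Pass an exoskeleton command only when the EEG intent decoder and the
    EMG activation detector agree, as in the dual-gating scheme above."""
    emg_threshold: float = 0.3                   # illustrative normalized activation level

    def step(self, eeg_intent: bool, emg_level: float) -> bool:
        # valid command <=> EEG says 'move' AND the muscle is actually active
        return eeg_intent and emg_level >= self.emg_threshold

gate = GatedCommand()
valid = [gate.step(i, e) for i, e in
         [(True, 0.5), (True, 0.1), (False, 0.9), (False, 0.0)]]
```

Only the first case (intent present and muscle active) yields a valid command; the other three are rejected, which is the safety property the dual gate buys.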
Paralysis is assumed permanent in persons with motor-complete spinal cord injury (SCI). However, spinal epidural stimulation combined with activity-based locomotor training (ABLT) and cognitive intent enabled two adults with motor-complete SCI to walk with a walker. Transcutaneous spinal stimulation (scTS), also capable of promoting a cyclic step-like pattern, might be a viable alternative in children with SCI. These findings prompted our investigation into multimodal neuromodulation training using ABLT (enhancing afferent input), spinal stimulation (scTS), and descending (intent) drive to restore voluntary stepping in children with chronic motor-complete SCI. Five non-ambulatory children (9.6 ± 2.5 years old, 3F, 4 thoracic/1 cervical injury) with chronic (>1 year, 5.2 ± 2.5 years), complete SCI underwent 60 sessions of combined ABLT and scTS training with cognitive intent to step and returned for a 3 to 6-month follow-up. During the first training session in a gravity-neutral position, all five children (5/5) made small reciprocal cycles of the hips/knees in a flexion/extension step-like pattern with stimulation, with increased excursion at session 20 for 5/5 children (right hip excursion increased from 10.1 ± 15.1 to 25.9 ± 21.3 degrees and right knee excursion increased from 9.3 ± 13.9 to 39.6 ± 29.2 degrees, p = 0.02). The children stepped overground at session 50 (P15), 60 (P34), and 20 (P32, P14, P240), voluntarily initiating and alternating left/right leg swings on the treadmill and overground with and without scTS. Three to six months post-training, all children maintained the capacity to step. The parents and children reported unanticipated improvements in sensation, bladder function, proprioception, assist to stand, transfers, and dressing. In children with chronic, motor-complete SCI, multimodal neuromodulation training can potentiate the intrinsic stepping capacity of the spinal locomotor centers to enable voluntary stepping. 
Remarkably, these enhancements are durable and observed even in the absence of spinal stimulation.
Motor impairments resulting from neurological disorders, such as strokes or spinal cord injuries, often impair hand and finger mobility, restricting a person’s ability to grasp and perform fine motor tasks. Brain plasticity refers to the inherent capability of the central nervous system to functionally and structurally reorganize itself in response to stimulation, which underpins rehabilitation from brain injuries or strokes. Linking voluntary cortical activity with corresponding motor execution has been identified as effective in promoting adaptive plasticity. This study introduces NeuroFlex, a motion-intent-controlled soft robotic glove for hand rehabilitation. NeuroFlex utilizes a transformer-based deep learning (DL) architecture to decode motion intent from motor imagery (MI) EEG data and translate it into control inputs for the assistive glove. The glove’s soft, lightweight, and flexible design enables users to perform rehabilitation exercises involving fist formation and grasping movements, aligning with natural hand functions for fine motor practices. The results show that the accuracy of decoding the intent of fingers making a fist from MI EEG can reach up to 85.3%, with an average AUC of 0.88. NeuroFlex demonstrates the feasibility of detecting and assisting the patient’s attempted movements using pure thinking through a non-intrusive brain–computer interface (BCI). This EEG-based soft glove aims to enhance the effectiveness and user experience of rehabilitation protocols, providing the possibility of extending therapeutic opportunities outside clinical settings.
Hand paralysis due to spinal cord injury (SCI) greatly limits the quality of life of injured individuals. Despite complete loss of hand digit control, however, residual electrical muscle activity can often be detected in these individuals. From this activity, individual motor unit action potentials can be identified and potentially used to infer motion intent for interfacing purposes. We recently demonstrated that residual motor units can be decoded in tetraplegic individuals with SCI by mapping both proximal and distal forearm activity using hundreds of electromyography (EMG) electrodes. Yet few studies have explored the feasibility of neural interfacing using only forearm motor units, or even far-field wrist motor units, in SCI, which would facilitate the use of fully wearable systems such as EMG bracelets. Here, we recognize finger gestures in eight tetraplegic individuals (seven with motor-complete SCI and one with motor-incomplete SCI) using either forearm or wrist motor units. We demonstrate that motion-wise surface EMG decomposition can effectively increase the number of decomposed motor units from both the forearm and wrist (on average 41.25 ± 24.14 from the forearm and 30 ± 9.72 from the wrist) and reach high accuracy in gesture recognition at both locations (82% to 100% with the forearm data, and 62% to 99% with the wrist data). The decomposition met the requirements of real-time implementation. Moreover, the correlation between far-field motor unit activity recorded at the wrist and the activity recorded at the forearm is revealed, further suggesting that both locations are suitable for interfacing.
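After EMG decomposition yields motor-unit discharge times, a common next step for gesture recognition is turning each unit's spike train into binned firing-rate features. A minimal numpy sketch with two toy units and two hypothetical gestures (all rates, names, and bin sizes are illustrative, not the paper's parameters):

```python
import numpy as np

def firing_rate_features(spike_times, duration, bin_s=0.1):
    """Bin each motor unit's discharge times into firing-rate features
    (spikes/s per bin), concatenated across units."""
    edges = np.arange(0.0, duration + bin_s, bin_s)
    return np.concatenate([np.histogram(st, edges)[0] / bin_s
                           for st in spike_times])

# Two toy 'gestures' that differ in which unit is most active.
rng = np.random.default_rng(3)
unit_fast = np.sort(rng.uniform(0, 1, 30))       # ~30 Hz discharge rate
unit_slow = np.sort(rng.uniform(0, 1, 8))        # ~8 Hz discharge rate
f_open = firing_rate_features([unit_fast, unit_slow], duration=1.0)
f_pinch = firing_rate_features([unit_slow, unit_fast], duration=1.0)
```

Fixed-length vectors like these can feed any standard classifier, which is how distinct gestures become separable even when the same units are active at different intensities.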
Patients with severely impaired motor functions require a stable form of communication for their daily life. Restoring this ability can be achieved with spelling applications controlled by brain-computer interfaces (BCIs). To achieve intuitive control of the application, we propose a BCI system to asynchronously detect single movement intent from EEG. By emulating a button press, we develop a task-agnostic framework applicable to a wide range of interfaces. The system utilizes a model based on movement-related cortical potentials (MRCPs) to detect self-initiated movements without the need for external cues. Twenty participants utilized the developed system to control a spelling interface implemented as a row-column scanner (3-by-3 and 5-by-5 size layouts) to type five-letter words. Participants achieved an overall true positive rate (TPR) of 54.4 ± 27.9% (up to 98.6% in single participants) with an average of 2.0 ± 1.9 false positives per minute (FP/min). 60.9 ± 28.5% of the target characters were correctly selected, and participants were able to successfully spell a five-letter word in 41.7 ± 42.7% of all attempts. The analysis of the EEG showed that the MRCP-based classifier maintained consistent detection performance across interface configurations, underscoring its robustness and adaptability to changing applications. These findings demonstrate the potential of the approach as a non-invasive communication aid and establish a foundation for future development of home-use BCIs that offer intuitive, voluntary control with minimal calibration requirements.
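The TPR and FP/min figures reported for asynchronous detectors like this one are computed by matching detections to true events within a tolerance window; unmatched detections count as false positives. A small sketch of that scoring (tolerance and timings are illustrative, not the paper's):

```python
def async_metrics(detections, events, tol, duration_s):
    """Score an asynchronous intent detector: a detection within `tol`
    seconds of a true event is a hit; leftover detections are false positives."""
    hits = 0
    unmatched = list(detections)
    for ev in events:
        near = [d for d in unmatched if abs(d - ev) <= tol]
        if near:                                  # claim the closest detection
            unmatched.remove(min(near, key=lambda d: abs(d - ev)))
            hits += 1
    tpr = hits / len(events)
    fp_per_min = len(unmatched) / (duration_s / 60.0)
    return tpr, fp_per_min

events = [10.0, 25.0, 40.0, 55.0]                 # true movement times (s)
detections = [10.3, 24.8, 41.9, 70.0]             # detector output; last is spurious
tpr, fp = async_metrics(detections, events, tol=1.0, duration_s=120.0)
```

Here two of four events are hit within tolerance (TPR 0.5) and the two unmatched detections over two minutes give 1.0 FP/min, mirroring how the paper's headline numbers are defined.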
Objective. Chronic motor impairments of the arms and hands as a consequence of cervical spinal cord injury (SCI) have a tremendous impact on activities of daily life. A considerable number of people, however, retain minimal voluntary motor control in the paralyzed parts of the upper limbs that is measurable by electromyography (EMG) and inertial measurement units (IMUs). Integration into human-machine interfaces (HMIs) holds promise for reliable grasp-intent detection and intuitive assistive-device control. Approach. We used a multimodal HMI incorporating EMG and IMU data to decode reach-and-grasp movements in groups of persons with cervical SCI (n = 4) and without (control, n = 13). A post-hoc evaluation of control-group data aimed to identify optimal parameters for online, co-adaptive closed-loop HMI sessions with persons with cervical SCI. We compared the performance of real-time, random-forest-based movement vs. rest (2 classes) and grasp-type (3 classes) predictors with respect to their co-adaptation and evaluated the underlying feature-importance maps. Main results. Our multimodal approach enabled grasp decoding significantly better than EMG or IMU data alone (p < 0.05). We found the 0.25 s directly prior to the first touch of an object to hold the most discriminative information. Our HMIs correctly predicted 79.3 ± STD 7.4 (102.7 ± STD 2.3 control group) out of 105 trials, with grand-average movement vs. rest prediction accuracies above 99.64% (100% sensitivity) and grasp prediction accuracies of 75.39 ± STD 13.77% (97.66 ± STD 5.48% control group). Co-adaptation led to higher prediction accuracies over time, and we could identify adaptations in feature importances unique to each participant with cervical SCI. Significance. Our findings foster the development of multimodal and adaptive HMIs to allow persons with cervical SCI the intuitive control of assistive devices to improve personal independence.
Restoring motor function in individuals with spinal cord injuries (SCIs), strokes, or amputations is a crucial challenge. Recent studies show that spared motor neurons can still be voluntarily controlled using surface electromyography (EMG), even without visible movement. To harness these signals, we developed a wireless, high-density EMG bracelet and a software framework, MyoGestic. Our system enables rapid adaptation of machine learning models to users’ needs, allowing real-time decoding of spared motor dimensions. In our study, we successfully decoded motor intent from two participants with traumatic SCI, two with spinal stroke, and three with amputations in real time, achieving multiple controllable motor dimensions within minutes. The decoded neural signals could control a digitally rendered hand, an orthosis, a prosthesis, or a two-dimensional cursor. MyoGestic’s participant-centered approach allows a collaborative and iterative development of myocontrol algorithms, bridging the gap between researcher and participant, to advance intuitive EMG interfaces for neural lesions.
Decoding motor intent from recorded neural signals is essential for the development of effective neural-controlled prostheses. To facilitate the development of online decoding algorithms we have developed a software platform to simulate neural motor signals recorded with peripheral nerve electrodes, such as longitudinal intrafascicular electrodes (LIFEs). The simulator uses stored motor intent signals to drive a pool of simulated motoneurons with various spike shapes, recruitment characteristics, and firing frequencies. Each electrode records a weighted sum of a subset of simulated motoneuron activity patterns. As designed, the simulator facilitates development of a suite of test scenarios that would not be possible with actual data sets because, unlike with actual recordings, in the simulator the individual contributions to the simulated composite recordings are known and can be methodically varied across a set of simulation runs. In this manner, the simulation tool is suitable for iterative development of real-time decoding algorithms prior to definitive evaluation in amputee subjects with implanted electrodes. The simulation tool was used to produce data sets that demonstrate its ability to capture some features of neural recordings that pose challenges for decoding algorithms.
EEG-based BCI systems can aid the recovery of hand motor function by decoding motor imagery (MI) and motor execution (ME) signals. Target conditions include stroke and spinal cord injury (SCI), which fall under the category of neuromotor disorders. Incorporating brain-computer interface technology has proven helpful for motor rehabilitation. This study explored movement-related cortical potentials and event-related desynchronization over the mu (8-13 Hz) and beta (14-30 Hz) bands. To classify hand movements, we apply machine learning methods including support vector machines (SVM), linear discriminant analysis (LDA), k-nearest neighbours (KNN), and random forests (RF) to the gathered EEG signals. The experiment uses publicly available EEG datasets. These datasets undergo preprocessing in the form of independent component analysis (ICA), band-pass filtering with frequency limits of 0.3-50 Hz, and other feature-extraction methods aimed at improving classification accuracy. The aim of the study is to enhance real-time EEG signal interpretation for BCI-operated robotic hands, thus contributing to the development of individualized systems for neurorehabilitation.
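The mu- and beta-band features at the heart of such pipelines reduce to band power. A minimal FFT-periodogram sketch (sampling rate and signal are synthetic; real pipelines typically use Welch estimates on filtered, artifact-cleaned data):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean spectral power of x in [f_lo, f_hi] Hz via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250
t = np.arange(fs * 4) / fs                       # 4 s of synthetic data
x = np.sin(2 * np.pi * 10 * t)                   # a pure 10 Hz 'mu' oscillation
mu_power = band_power(x, fs, 8, 13)
beta_power = band_power(x, fs, 14, 30)
```

Event-related desynchronization then shows up as a drop in these band powers during movement or imagery relative to a baseline window, which is what the classifiers above discriminate.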
In the realm of motor rehabilitation, Brain-Computer Interface Neurofeedback Training (BCI-NFT) emerges as a promising strategy. This aims to utilize an individual’s brain activity to stimulate or assist movement, thereby strengthening sensorimotor pathways and promoting motor recovery. Employing various methodologies, BCI-NFT has been shown to be effective for enhancing motor function primarily of the upper limb in stroke, with very few studies reported in cerebral palsy (CP). Our main objective was to develop an electroencephalography (EEG)-based BCI-NFT system, employing an associative learning paradigm, to improve selective control of ankle dorsiflexion in CP and potentially other neurological populations. First, in a cohort of eight healthy volunteers, we successfully implemented a BCI-NFT system based on detection of slow movement-related cortical potentials (MRCP) from EEG generated by attempted dorsiflexion to simultaneously activate Neuromuscular Electrical Stimulation which assisted movement and served to enhance sensory feedback to the sensorimotor cortex. Participants also viewed a computer display that provided real-time visual feedback of ankle range of motion with an individualized target region displayed to encourage maximal effort. After evaluating several potential strategies, we employed a Long short-term memory (LSTM) neural network, a deep learning algorithm, to detect the motor intent prior to movement onset. We then evaluated the system in a 10-session ankle dorsiflexion training protocol on a child with CP. By employing transfer learning across sessions, we could significantly reduce the number of calibration trials from 50 to 20 without compromising detection accuracy, which was 80.8% on average. The participant was able to complete the required calibration trials and the 100 training trials per session for all 10 sessions and post-training demonstrated increased ankle dorsiflexion velocity, walking speed and step length. 
Based on exceptional system performance, feasibility and preliminary effectiveness in a child with CP, we are now pursuing a clinical trial in a larger cohort of children with CP.
Background: Lower motor neurons in the spinal cord lose supraspinal inputs after complete spinal cord injury, leading to a loss of volitional control below the injury site. Extensive locomotor training with spinal cord stimulation can restore locomotor function after spinal cord injury in humans and animals. However, this locomotion is non-voluntary, meaning that subjects cannot control stimulation via their natural "intent". A recent study demonstrated an advanced system that triggers a stimulator using forelimb stepping electromyographic patterns to restore quadrupedal walking in rats with spinal cord transection. However, this indirect source of "intent" may mean that other non-stepping forelimb activities falsely trigger the spinal stimulator and thus produce unwanted hindlimb movements. Methods: We hypothesized that there are distinguishable neural activities in the primary motor cortex during treadmill walking, even after low-thoracic spinal transection in adult guinea pigs. We developed an electronic spinal bridge, called "Motolink", which detects these neural patterns and triggers a "spinal" stimulator for hindlimb movement. This hardware can be head-mounted or carried in a backpack. Neural data were processed in real time and transmitted to a computer for analysis by an embedded processor. Offline neural spike analysis was conducted to calculate and preset the spike threshold for the "Motolink" hardware. Results: We identified correlated activities of primary motor cortex neurons during treadmill walking in guinea pigs with spinal cord transection. These neural activities were used to predict the kinematic states of the animals.
Appropriate selection of the spike-threshold value enabled the "Motolink" system to detect the neural "intent" of walking, which triggered electrical stimulation of the spinal cord and induced stepping-like hindlimb movements. Conclusion: We present a direct cortical "intent"-driven electronic spinal bridge to restore hindlimb locomotion after complete spinal cord injury.
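The preset-spike-threshold idea above amounts to a sliding-window count detector: sum cortical spike counts over a short window and trigger stimulation when the sum crosses a preset threshold. A toy sketch (window length, threshold, and counts are invented for illustration):

```python
import numpy as np

def intent_trigger(spike_counts, threshold, window=5):
    """Return the sample index at which the summed spike count in a sliding
    window first reaches `threshold`, or None if it never does."""
    windowed = np.convolve(spike_counts, np.ones(window), mode="valid")
    crossings = np.flatnonzero(windowed >= threshold)
    # report the last sample of the first window that crossed
    return int(crossings[0]) + window - 1 if crossings.size else None

rest = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]            # sparse cortical firing at rest
walk = [5, 6, 7, 5, 8, 6, 7]                     # elevated firing during stepping
onset = intent_trigger(np.array(rest + walk), threshold=25)
```

Raising the threshold trades missed steps against the false triggers from non-stepping activity that motivated the cortical (rather than forelimb-EMG) intent source.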
Abstract Loss of hand function after cervical spinal cord injury severely impairs functional independence. We describe a method for restoring volitional control of hand grasp in one 21-year-old male subject with complete cervical quadriplegia (C5 American Spinal Injury Association Impairment Scale A) using a portable, fully implanted brain–computer interface within the home environment. The brain–computer interface consists of subdural surface electrodes placed over the dominant-hand motor cortex and connects to a transmitter implanted subcutaneously below the clavicle, which allows continuous reading of the electrocorticographic activity. Movement intent was used to trigger functional electrical stimulation of the dominant hand during an initial 29-week laboratory study and subsequently via a mechanical hand orthosis during in-home use. Movement-intent information could be decoded consistently throughout the 29-week in-laboratory study with a mean accuracy of 89.0% (range 78–93.3%). Improvements were observed in both the speed and accuracy of various upper extremity tasks, including lifting small objects and transferring objects to specific targets. At-home decoding accuracy reached 91.3% (range 80–98.95%) during open-loop trials and 88.3% (range 77.6–95.5%) during closed-loop trials. Importantly, the temporal stability of the functional outcomes and decoder metrics was not explored in this study. A fully implanted brain–computer interface can be safely used to reliably decode movement intent from the motor cortex, allowing accurate volitional control of hand grasp.
This study investigates whether spinal manipulation leads to changes in motor control by measuring the recruitment pattern of motor units in both an upper and a lower limb muscle, and whether such changes may at least in part occur at the cortical level, by recording movement-related cortical potential (MRCP) amplitudes. In experiment one, transcranial magnetic stimulation input–output (TMS I/O) curves for an upper limb muscle (abductor pollicis brevis; APB) were recorded, along with F waves, before and after either spinal manipulation or a control intervention in the same subjects on two different days. On two separate days, lower limb TMS I/O curves and MRCPs were recorded from the tibialis anterior muscle (TA) pre and post spinal manipulation. Dependent measures were compared with repeated-measures analysis of variance, with p set at 0.05. Spinal manipulation resulted in a 54.5% ± 93.1% increase in maximum motor evoked potential (MEPmax) for APB and a 44.6% ± 69.6% increase in MEPmax for TA. For the MRCP data, following spinal manipulation there were significant differences in the amplitudes of the early Bereitschaftspotential (EBP), the late Bereitschaftspotential (LBP), and the peak negativity (PN). The results of this study show that spinal manipulation leads to changes in cortical excitability, as measured by significantly larger MEPmax for TMS-induced input–output curves for both an upper and a lower limb muscle, and by larger amplitudes of MRCP components post manipulation. No changes in spinal measures (i.e., F-wave amplitudes or persistence) were observed, and no changes followed the control condition. These results are consistent with previous findings suggesting that increases in strength following spinal manipulation are due to descending cortical drive and cannot be explained by changes at the level of the spinal cord.
Spinal manipulation may therefore be indicated for patients who have lost muscle tone and/or are recovering from muscle-degrading dysfunctions such as stroke or orthopaedic operations, and may also be of interest to sports performers. These findings should be followed up in the relevant populations.
Neural or muscular injuries, such as those due to amputation, spinal cord injury, and stroke, can affect hand function, profoundly impacting independent living. This has motivated the advancement of cutting-edge assistive robotic hands. However, unintuitive myoelectric control remains challenging and limits the clinical translation of these devices. Accordingly, we developed a robust motor-intent decoding approach to continuously predict the intended fingertip forces of single and multiple fingers in real time. We used population motor neuron discharge activities (i.e., neural drive from the brain to the spinal cord) decoded from high-density surface electromyogram (HD-sEMG) signals as the control signals, instead of conventional global sEMG features. To enable real-time neural-drive prediction, we employed a convolutional neural network model to establish the mapping from global HD-sEMG features to finger-specific neural-drive signals, which were then used for continuous, real-time control of three prosthetic fingers (index, middle, and ring). The neural-drive-based approach decoded the motor intent of single-finger and multi-finger forces with significantly lower force-estimation errors than the global HD-sEMG-amplitude approach. In addition, the force-prediction accuracy was consistent over time and demonstrated strong robustness to signal interference. Our network-based decoder also achieved better finger isolation, with minimal forces predicted in unintended fingers. Our work demonstrates that accurate and robust finger force control can be achieved through this new decoding approach. The outcomes offer an efficient intent-prediction approach that allows users intuitive, dexterous control of prosthetic fingertip forces.
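The feature-to-neural-drive mapping described above can be illustrated with a minimal forward pass. This sketch is not the authors' trained network: the single convolution layer, the ReLU, the linear readout, and all array shapes are simplifying assumptions standing in for their CNN.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1-D convolution. x: (channels, time);
    kernels: (filters, channels, width); bias: (filters,)."""
    f, c, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.zeros((f, t_out))
    for k in range(f):
        for i in range(t_out):
            out[k, i] = np.sum(kernels[k] * x[:, i:i + w]) + bias[k]
    return out

def predict_neural_drive(emg_features, kernels, bias, readout):
    """One conv layer + ReLU + linear readout mapping HD-sEMG
    feature channels to per-finger neural-drive estimates."""
    h = np.maximum(conv1d(emg_features, bias=bias, kernels=kernels), 0.0)
    return readout @ h  # (fingers, time)
```

In practice such a model would be trained (e.g., by gradient descent in a deep learning framework) against neural-drive targets obtained from sEMG decomposition; only the inference path is shown here.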
Purpose: Brain-computer interfaces (BCIs) offer a pathway to restore ambulation in individuals with spinal cord injury (SCI). However, existing BCI systems for gait are unidirectional and lack sensory feedback. This study aimed to demonstrate that a bidirectional brain-computer interface (BDBCI) can simultaneously enable real-time brain-controlled walking and artificial leg sensation via electrical stimulation of the sensory cortex. Methods: Epilepsy patients undergoing bilateral interhemispheric subdural electrocorticography (ECoG) implantation were recruited for this proof-of-concept study. Motor mapping identified electrodes in the leg motor cortex for decoding stepping intent, while sensory stimulation mapping determined stimulation sites in the somatosensory cortex to elicit artificial leg percepts. A custom embedded BDBCI decoded motor intent from ECoG signals in real time to actuate a robotic gait exoskeleton (RGE) and delivered leg-swing sensory feedback via direct cortical stimulation. Performance was assessed through correlations between cued and decoded states, sensory reliability tasks, and control experiments. Results: One subject was recruited and achieved high decoding performance (ρ = 0.92 ± 0.04, lag of 3.5 ± 0.5 s) across 10 runs of operating the BDBCI-controlled RGE. Bilateral leg percepts were validated through a blind step-counting task (92.8% accuracy, p < 10⁻⁶). Control experiments verified that decoding was not affected by stimulation artifacts. No adverse events were reported. Discussion: This study establishes the feasibility of an embedded BDBCI system for restoring both motor control and artificial sensation of walking. Leveraging the interhemispheric leg sensorimotor cortices is safe and yields superior decoding compared to prior lateral brain-convexity approaches. These findings provide a foundation for translating BDBCI technology into fully implantable systems for SCI patients with paraplegia.
Brain-computer interfaces (BCIs) could enable persons with cervical spinal cord injury (SCI) to intuitively control assistive motor devices for regaining lost grasping function. Previous studies, mostly performed in non-disabled persons, have already shown that complex upper limb movements can be decoded from the low-frequency time domain of the electroencephalogram (EEG). In this work, we attempted to translate these results to persons with cervical SCI and investigated whether executed reach and attempted grasp actions could be decoded from their EEG signals. For this, we chose three different reach-and-grasp actions, two unimanual and one bimanual, towards objects of daily life. During participants' self-initiated, executed reach and attempted grasp actions, we recorded EEG using mobile, water-based electrodes. We measured two participants with subacute cervical SCI who had preserved shoulder movements and elbow flexion but no wrist and hand functions. Both repeated the session three times. We also recorded the EEG of 10 non-disabled persons performing the same tasks (control group). We extracted and analyzed movement-related cortical potentials (MRCPs) from the EEG's low-frequency time domain. Subsequently, we assessed the decoding capabilities of two linear (shrinkage-based linear discriminant analysis (sLDA), linear support vector machine (SVM)) and two non-linear (Random Forest (RF), naive Bayes (NBC)) classification models for the discrimination of the grasp actions. We could show that sLDA, SVM, and Random Forest yielded comparable classification results, on average 63.4% ± SD 9% for participants with SCI and 69.7% ± SD 9% for the control group (chance level 29.3%). Our results indicate that it is feasible to decode executed reach and attempted grasp actions from MRCPs of persons with subacute cervical SCI. Future measurements will provide additional data to assess the generalizability of our results in a larger group of people with cervical SCI.
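The shrinkage-based LDA named above can be sketched in a few lines. This is a generic two-class sketch assuming a fixed shrinkage coefficient rather than the analytically estimated one typically used in BCI toolchains; `slda_fit` and `slda_predict` are hypothetical names.

```python
import numpy as np

def slda_fit(X, y, shrink=0.1):
    """Two-class shrinkage LDA: regularise the pooled within-class
    covariance toward a scaled identity before inverting, which
    stabilises the estimate when trials are few and features many."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
    d = S.shape[0]
    S = (1.0 - shrink) * S + shrink * (np.trace(S) / d) * np.eye(d)
    w = np.linalg.solve(S, m1 - m0)
    b = -w @ (m0 + m1) / 2.0
    return w, b

def slda_predict(X, w, b):
    """Label 1 when the linear discriminant is positive."""
    return (X @ w + b > 0).astype(int)
```

The multi-class case used in the study (three grasp actions) would apply the same regularised covariance in a one-vs-rest or multi-class LDA formulation.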
No abstract available
Brain-Computer Interfaces (BCIs) aim at bridging residual neural activity related to motor intents with motor commands, presenting a revolutionary tool for helping tetraplegic and paraplegic individuals regain motor control. Electrocorticography (ECoG)-based BCIs have emerged as a good compromise between the invasiveness of the recording device and the quality of the recorded signals, making them a promising modality for BCI applications. The WIMAGINE system, developed by CEA Clinatec for ECoG recording, has been implanted for several years in proof-of-concept clinical trials in individuals with chronic impairments after severe spinal cord injury. Nevertheless, adequate algorithms are crucial to decipher brain signals, ideally in real time on portable systems, so that these solutions can be proposed to patients in their daily life. While NPLS-based ECoG decoders have been successfully trained online in closed-loop clinical trials, deep learning alternatives have primarily been evaluated in offline settings. In this study, we investigate the offline training of deep learning models on datasets that replicate the temporal dynamics of real-time data acquisition. Our approach accounts for key factors such as inter-subject variability and signal drift, ensuring a more realistic evaluation. We systematically evaluate the performance of multiple models for classifying arm, leg, and wrist movements and predicting 3D translations of the arms and wrists, using varying amounts of ECoG data recorded from tetraplegic and paraplegic patients during motor imagery tasks. Additionally, we introduce a novel deep learning model based on the transformer architecture, specifically designed to be adjustable to scenarios with low data amounts.
While the NPLS-based ECoG decoder achieved better performance on the considered classification datasets, on the regression datasets with the lowest data amounts our model achieved performance comparable to, or exceeding, that of the NPLS decoder, while using less than half the parameters.
Assistive neuro-inspired rehabilitation devices are essential for people who have suffered a spinal cord injury (SCI), stroke, or limb amputation in their activities of daily living. Neuro-inspired rehabilitation devices typically use a single-modal biosignal with a conventional machine learning algorithm on an embedded edge device for gesture classification. Although deep learning decoders provide high-accuracy gesture classification, the mismatch between their computational complexity and the resource availability of edge devices has limited the deployment of real-time gesture inference on embedded devices. In this study, we describe an event-driven, edge-compatible deep neural network (DNN) capable of classifying gestures from a single or hybrid biosignal detected at the edge. The DNN-based decoders were deployed on a field-programmable gate array (FPGA) to classify motor intent acquired from the biosensors for intuitive control of a 3-D-printed upper limb rehabilitation device. The study was validated with 33 subjects offline and on-device. Using the 8-bit fixed-point quantization-aware method, offline average classification accuracies of 93.14% for single-modal electromyography (EMG-Net), 50.42% for single-modal electroencephalography (EEG-Net), and 93.35% for the hybrid-modal biosignal (Hybrid-Net) were obtained, while real-time inference on the FPGA resulted in 94.97%, 58.27%, and 92.73%, respectively. Shifting the EMG biosensor by 5 cm to examine model degradation yielded accuracy losses of 11.5% and 2.64% for the on-device EMG-Net and Hybrid-Net, respectively. The event-driven algorithm performed with a reliability of 1, ensuring inference with voluntary gesture grasp. The study reports that hybrid biosignals outperformed single-modal EEG in gesture classification both offline and on-device, and single-modal EMG in the case of EMG electrode shift.
In addition, this article demonstrates an end-to-end approach that deploys a DNN decoder to an edge device for neuro-inspired control of a dexterous hand without an Internet-of-Things (IoT) connection. The data and code are available at the following repository: https://github.com/HumanMachineInterface/Gest-Infer.
Motor neurons in the brain and spinal cord convey information about motor intent that can be extracted and interpreted to control assistive devices, such as computers, wheelchairs, and robotic manipulators. However, most methods for measuring the firing activity of single neurons rely on implanted microelectrodes. Although intracortical brain-computer interfaces (BCIs) have been shown to be safe and effective, the requirement for surgery poses a barrier to widespread use. Here, we demonstrate that a wearable sensor array can detect residual motor unit activity in paralyzed muscles after severe cervical spinal cord injury (SCI). Despite generating no observable hand movement, volitional recruitment of motor neurons below the level of injury was observed across attempted movements of individual fingers and overt wrist and elbow movements. Subgroups of motor units were coactive during flexion or extension phases of the task. Single-digit movement intentions were classified offline from the EMG power (RMS) or motor unit firing rates, with median classification accuracies >75% in both cases. Simulated online control of a virtual hand was performed with a binary classifier to test the feasibility of real-time extraction and decoding of motor units. The online decomposition algorithm extracted motor units in 1.2 ms, and the firing rates predicted the correct digit motion 88 ± 24% of the time. This study provides the first demonstration of a wearable interface for recording and decoding firing rates of motor neurons below the level of injury in a person with tetraplegia after motor complete SCI.
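The offline classification of digit intentions from motor unit firing rates could be approximated with a nearest-template decoder. The paper's exact classifier is not detailed here, so this nearest-centroid sketch over per-unit firing-rate vectors is an assumption; `fit_templates` and `classify_intent` are hypothetical names.

```python
import numpy as np

def fit_templates(rates, labels):
    """Mean firing-rate vector (one entry per motor unit) for each
    attempted movement class."""
    return {c: rates[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_intent(sample, templates):
    """Assign the attempted movement whose firing-rate template is
    nearest in Euclidean distance."""
    return min(templates, key=lambda c: np.linalg.norm(sample - templates[c]))
```

The same structure works for the binary flexion/extension case used in the simulated online control, with two templates instead of one per digit.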
Spinal cord injury (SCI) can disrupt the communication pathways between the brain and the rest of the body, restricting the ability to perform volitional movements. Neuroprostheses or robotic arms can enable individuals with SCI to move independently, improving their quality of life. The control of restorative or assistive devices is facilitated by brain-computer interfaces (BCIs), which convert brain activity into control commands. In this paper, we summarize the recent findings of our research toward the main aim of providing reliable and intuitive control. We propose a framework that encompasses the detection of goal-directed movement intention, movement classification and decoding, detection of error-related potentials, and delivery of kinesthetic feedback. Finally, we discuss future directions that could be promising for translating the proposed framework to individuals with SCI.
Transcutaneous electrical spinal cord stimulation (tSCS) is a non-invasive neuromodulatory technique that has in recent years been linked to improved volitional limb control in spinal-cord-injured individuals. Although the technique is growing in popularity, there is still uncertainty regarding the neural mechanisms underpinning sensory and motor recovery. Brain monitoring techniques such as electroencephalography (EEG) may provide further insights into the changes in corticospinal excitability that have already been demonstrated using other techniques. It is unknown, however, whether intelligible EEG can be extracted while tSCS is being applied, owing to the substantial high-amplitude artifacts associated with stimulation-based therapies. Here, for the first time, we characterise the artifacts that manifest in EEG when recorded simultaneously with tSCS. We recorded multi-channel EEG from 21 healthy volunteers as they took part in a resting state and a movement task across two sessions: one with tSCS delivered to the cervical region of the neck, and one without tSCS. An offline analysis in the time and frequency domains showed that tSCS manifested as narrow, high-amplitude peaks with a spectral density contained at the stimulation frequency. We quantified the altered signals with descriptive statistics (kurtosis, root-mean-square, complexity, and zero crossings) and applied artifact-suppression techniques (superposition of moving averages, adaptive, median, and notch filtering) to explore whether the effects of tSCS could be suppressed. We found that the superposition-of-moving-averages filter was the most successful technique at returning contaminated EEG to levels statistically similar to those of normal EEG. In the frequency domain, however, notch filtering was more effective at reducing the spectral power contribution of stimulation from frontal and central electrodes. An adaptive filter was more appropriate for channels closer to the stimulation site.
Lastly, we found that tSCS posed no detriment to the binary classification of upper-limb movements from sensorimotor rhythms, and that adaptive filtering resulted in poorer classification performance. Overall, we showed that, depending on the analysis, EEG monitoring during transcutaneous electrical spinal cord stimulation is feasible. This study supports future investigations using EEG to study the activity of the sensorimotor cortex during tSCS, and potentially paves the way to brain–computer interfaces operating in the presence of spinal stimulation.
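The superposition-of-moving-averages idea (averaging pulse-locked epochs to estimate a periodic artifact template, then subtracting it at every pulse) can be sketched as follows. The epoch length and single-channel handling are simplifying assumptions, and real tSCS artifacts vary from pulse to pulse, which is why the published filter uses running rather than global averages.

```python
import numpy as np

def suppress_stim_artifact(eeg, pulse_idx, half_width):
    """Epoch-average artifact suppression in the spirit of the
    superposition-of-moving-averages filter: average pulse-locked
    epochs to form an artifact template, then subtract the
    template from the signal at every pulse."""
    out = np.asarray(eeg, dtype=float).copy()
    spans = [(i - half_width, i + half_width) for i in pulse_idx
             if i - half_width >= 0 and i + half_width <= len(out)]
    template = np.mean([out[a:b] for a, b in spans], axis=0)
    for a, b in spans:
        out[a:b] -= template
    return out
```

When the artifact is identical at every pulse and the underlying EEG averages to near zero across epochs, the subtraction removes the contamination almost exactly; pulse-to-pulse variability is what motivates the adaptive and median variants compared in the study.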
Brain-computer interfaces access the volitional command signals from various brain areas in order to substitute for the motor functions lost due to spinal cord injury or disease. As the final common pathway of the central nervous system (CNS) outputs, the descending tracts of the spinal cord offer an alternative site to extract movement-related command signals. Using flexible 2D microelectrode arrays, we have recorded the corticospinal tract (CST) signals in rats during a reach-to-pull task. The CST activity was then classified by the forelimb movement phases into two or three classes in a training dataset and cross validated in a test set. The average classification accuracies were 80 ± 10% (min: 62%, max: 97%) and 55 ± 8% (min: 43%, max: 71%) for the two-class and three-class cases, respectively. The forelimb flexor and extensor EMG envelopes were also predicted from the CST signals using linear regression. The average correlation coefficient between the actual and predicted EMG signals was 0.5 ± 0.13 (n = 124), whereas the highest correlation was 0.81 for the biceps EMG. Although the forelimb motor function cannot be explained completely by the CST activity alone, the success rates obtained in reconstructing the EMG signals support the feasibility of a spinal-cord-computer interface as a concept.
Background: Spinal cord injury (SCI) results in the partial or complete loss of movement and sensation below the level of injury. In individuals with cervical-level SCI, there is a great need for voluntary command generation for environmental control, self-mobility, or computer access to improve their independence and quality of life. Brain-computer interfacing is one way of generating these voluntary command signals. As an alternative, this study investigates the feasibility of utilizing descending signals in the dorsolateral spinal cord tracts above the point of injury as a means of generating volitional motor control signals. Methods: In this work, adult male rats were implanted with a 15-channel microelectrode array (MEA) in the dorsolateral funiculus of the cervical spinal cord to record multi-unit activity from the descending pathways while the animals performed a reach-to-grasp task. Mean signal amplitudes and signal-to-noise ratios during the behavior were monitored and quantified for recording periods up to 3 months post-implant. One-way analysis of variance (ANOVA) with Tukey's post-hoc analysis was used to investigate signal-amplitude stability during the study period. Multiple linear regression was employed to reconstruct the forelimb kinematics, i.e. the hand position, elbow angle, and hand velocity, from the spinal cord signals. Results: The percentages of electrodes with stable signal amplitudes (p-value < 0.05) were 50% in R1, 100% in R2, 72% in R3, and 85% in R4. Forelimb kinematics was reconstructed with correlations of R² > 0.7 using tap-delayed principal components of the spinal cord signals. Conclusions: This study demonstrated that chronic recordings of up to 3 months can be made from the descending tracts of the rat spinal cord with relatively small changes in signal characteristics over time, and that the forelimb kinematics can be reconstructed from the recorded signals.
The multi-unit recording technique may prove to be a viable alternative to single-neuron recording methods for reading the information encoded by neuronal populations in the spinal cord.
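The reconstruction pipeline named above (tap-delayed signals, principal components, multiple linear regression) can be sketched with plain least squares. The lag count, the number of principal components, and the function names are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def tap_delay(X, lags):
    """Stack lagged copies of the (time x channels) signals into a
    (time x channels*lags) design matrix; missing history is zero."""
    T, C = X.shape
    D = np.zeros((T, C * lags))
    for l in range(lags):
        D[l:, l * C:(l + 1) * C] = X[:T - l]
    return D

def decode_kinematics(neural, kin, lags=3, n_pc=4):
    """Project tap-delayed spinal signals onto their leading
    principal components (via SVD), then least-squares map the
    components plus an intercept to the kinematic trace."""
    D = tap_delay(neural, lags)
    D = D - D.mean(axis=0)
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    A = np.column_stack([D @ Vt[:n_pc].T, np.ones(len(D))])
    W, *_ = np.linalg.lstsq(A, kin, rcond=None)
    return A @ W
```

Keeping only the top components regularises the regression when channels are correlated, which is the practical reason for the PCA step between the tap delays and the linear fit.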
Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by central pattern generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis.
No abstract available
No abstract available
Motor execution induces significant alterations in the dynamics of electroencephalography (EEG) signals, which are crucial for assessing rehabilitation, brain plasticity, and brain-computer interface (BCI) applications. While traditional analyses have primarily focused on power spectral changes, recent advancements incorporate non-linear indices to uncover previously undetected characteristics of brain dynamics. Network analysis provides a powerful framework to examine the structural organization and communication within complex systems composed of interconnected neural units. This study investigates the structural properties of functional networks formed during both active and resting states under different knee-joint flexion tasks. These movements were performed under three physical-demand conditions, including an assisted, non-volitional movement. Functional networks were constructed from EEG recorded over 16 electrodes for the μ, β, and γ frequency bands, and key network metrics were estimated, including input and output node degree centrality, clustering coefficient, and betweenness centrality. Results indicate that motor execution leads to a reduction in overall network connectivity while enhancing communication efficiency. Additionally, networks in the γ and μ bands were more involved in voluntary movement, whereas the β band played a predominant role in assisted movements. The spatial distribution of electrodes contributing to these networks differed between voluntary and assisted conditions, suggesting distinct underlying neural mechanisms rather than a simple linear modulation of connectivity.
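Two of the metrics named above, node degree and the clustering coefficient, can be computed directly from an adjacency matrix. This sketch assumes binary graphs, whereas EEG functional networks are typically weighted and thresholded before such metrics are taken; betweenness centrality needs shortest-path machinery and is omitted.

```python
import numpy as np

def node_degrees(A):
    """In-degree and out-degree of each node of a binary directed
    adjacency matrix (A[i, j] = 1 means an edge i -> j)."""
    A = np.asarray(A)
    return A.sum(axis=0), A.sum(axis=1)

def clustering_coefficients(A):
    """Local clustering coefficient of an undirected binary graph:
    the fraction of a node's neighbour pairs that are themselves
    linked (triangles over possible triangles)."""
    A = np.asarray(A)
    cc = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        nb = np.flatnonzero(A[i])
        k = len(nb)
        if k >= 2:
            links = A[np.ix_(nb, nb)].sum() / 2.0
            cc[i] = 2.0 * links / (k * (k - 1))
    return cc
```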
No abstract available
Introduction: This study aimed at investigating stimulation by intra-spinal signals decoded from electrocorticography (ECoG) recordings to restore leg movements in an animal model of spinal cord injury (SCI). Methods: The present work comprises three steps. First, ECoG signals and the associated leg-joint changes (hip, knee, and ankle) were recorded in different trials in sedated healthy rabbits. Second, an appropriate set of intra-spinal electric stimuli was discovered to restore natural leg movements, using the three leg-joint movements under a fuzzy-controlled strategy in spinally injured rabbits under anesthesia. Third, a nonlinear autoregressive exogenous (NARX) neural network model was developed to produce appropriate intra-spinal stimulation from decoded ECoG information. The model was able to correlate the ECoG signal data to the intra-spinal stimulation data and, finally, to induce the desired leg movements. In this study, leg movements were also generated from offline ECoG signals (recorded from uninjured rabbits) as well as online ECoG data (recorded from the same rabbit after SCI induction). Results: Based on our data, the correlation coefficient was 0.74 ± 0.15 and the normalized root-mean-square error of the brain-spine interface was 0.22 ± 0.10. Conclusion: Overall, we found that using NARX, appropriate information can be extracted from ECoG recordings and used to generate proper intra-spinal electric stimulation for the restoration of natural leg movements lost due to SCI.
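A NARX model maps lagged exogenous inputs and lagged outputs to the next output sample. The study used a NARX neural network; the linear least-squares variant below is a simplified stand-in that keeps only the lag structure, with lag orders and function names chosen for illustration.

```python
import numpy as np

def narx_design(u, y, nu=3, ny=2):
    """Regression matrix of lagged exogenous input u (e.g., an ECoG
    feature) and lagged output y (e.g., a stimulation parameter)."""
    start = max(nu, ny)
    rows = [np.concatenate([u[t - nu:t], y[t - ny:t]])
            for t in range(start, len(u))]
    return np.array(rows), y[start:]

def narx_fit_predict(u, y, nu=3, ny=2):
    """Fit linear NARX weights by least squares and return the
    one-step-ahead (teacher-forced) prediction with its target."""
    X, target = narx_design(u, y, nu, ny)
    X1 = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(X1, target, rcond=None)
    return X1 @ w, target
```

Replacing the linear map with a small feed-forward network over the same lagged features recovers the nonlinear NARX form used in the study.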
Motor rehabilitation is a therapeutic process to facilitate functional recovery in people with spinal cord injury (SCI). However, its efficacy is limited to areas with remaining sensorimotor function. Spinal cord stimulation (SCS) creates a temporary prosthetic effect that may allow further rehabilitation-induced recovery in individuals without remaining sensorimotor function, thereby extending the therapeutic reach of motor rehabilitation to individuals with more severe injuries. In this work, we report our first steps in developing a non-invasive brain-spine interface (BSI) based on electroencephalography (EEG) and transcutaneous spinal cord stimulation (tSCS). The objective of this study was to identify EEG-based neural correlates of lower limb movement in the sensorimotor cortex of unimpaired individuals (N = 17) and to quantify the performance of a linear discriminant analysis (LDA) decoder in detecting movement onset from these neural correlates. Our results show that initiation of knee extension was associated with event-related desynchronization in the central-medial cortical regions at frequency bands between 4 and 44 Hz. Our neural decoder using µ (8–12 Hz), low β (16–20 Hz), and high β (24–28 Hz) frequency bands achieved an average area under the curve (AUC) of 0.83 ± 0.06 s.d. (n = 7) during a cued movement task offline. Generalization to imagery and uncued movement tasks served as positive controls to verify robustness against movement artifacts and cue-related confounds, respectively. With the addition of real-time decoder-modulated tSCS, the neural decoder performed with an average AUC of 0.81 ± 0.05 s.d. (n = 9) on cued movement and 0.68 ± 0.12 s.d. (n = 9) on uncued movement. Our results suggest that the decrease in decoder performance in uncued movement may be due to differences in underlying cortical strategies between conditions. 
Furthermore, we explore alternative applications of the BSI system by testing neural decoders trained on uncued movement and imagery tasks. By developing a non-invasive BSI, tSCS can be timed to be delivered only during voluntary effort, which may have implications for improving rehabilitation.
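The band-power features and AUC scoring used to evaluate the movement-onset decoder above can be computed from first principles. The periodogram band-power estimate and the rank-based AUC below are generic sketches, not the study's exact preprocessing (which would typically use Welch averaging and per-band spatial filtering).

```python
import numpy as np

def band_power(epoch, fs, band):
    """Mean FFT-periodogram power of a single-channel epoch inside
    a frequency band given as (low, high) in Hz."""
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity:
    the probability a positive example outranks a negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = np.asarray(labels) == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2.0) / (n1 * n0)
```

Event-related desynchronization then appears as a drop in `band_power` over the µ and β bands at movement onset, which is what the LDA decoder thresholds.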
This article explains the "Applied Medi-Brain Energy-Tronic Treatment Method" for the medical treatment of SMA (spinal muscular atrophy), paralysis, ALS, MPS, SSPE, and DMD patients, together with a biomechanical analysis of a bionic prosthetic robotic artificial hand design. For many people, artificial limbs are devices that replace a lost organ or limb; their purpose is to perform the lost limb's functions in daily life and increase the individual's quality of life. These limbs use advanced mechanisms, sensors, and motors to mimic natural limbs. While traditional prosthetics often offer limited flexibility and functionality, artificial limbs are becoming more personalized, functional, and aesthetically refined thanks to 3D printers. The design discussed in this project is an artificial hand prototype produced with PLA filament using 3D printers. The artificial hand design aims to fulfill basic functions such as independent movement of the fingers, holding, and grasping in a way that complies with the biomechanical structure of the human hand. The "Applied Medi-Brain Energy-Tronic Treatment Method" is original to the corresponding author of this article, Emin Taner ELMAS; it has not been applied before, was first conceived and designed by the author, and can be put into practice through step-by-step development stages. The project contains the theory of a method under development that may treat SMA (spinal muscular atrophy) and other similar neurological diseases. In the study, brain data will be examined with a 14-channel EEG (electroencephalography) device.
With this device, the signals in the brain will be examined and transmitted to the patients' muscles. Many physical and sensory functions cannot be performed by SMA patients: coughing, swallowing, breathing, chewing, walking, and hand, arm, leg, and other muscle movements cannot occur. With this EEG device, the signals in the brain can be visualized as waves. Using the EEG device's software, it is possible to manipulate a cube on the computer screen through thought alone, and to simulate facial movements and facial expressions on the screen as well.
Real-time brain-computer interfaces (BCIs) that decode electroencephalograms (EEG) during motor imagery (MI) are powerful adjuncts to rehabilitation after neurotrauma. Further, immersive virtual reality (VR) could complement BCIs by delivering visual and auditory sensory feedback (VR biofeedback) congruent with the user's MI, enabling task-oriented therapies. Yet, therapeutic outcomes rely on the user's proficiency in evoking MI to attain volitional BCI-commanded VR interaction. While previous studies have explored multi-session BCIs, we investigated the impact of longitudinal training on sensorimotor neuromodulation using a BCI combined with VR-mediated, externally cued and self-paced lower-limb MI tasks. The EEG-based BCI was coupled with real-time VR biofeedback congruent with the MI task. Over multiple training sessions in laboratory conditions, five unimpaired individuals progressively learnt to improve control over their EEG during MI virtual walking, corresponding with increased BCI classification accuracy. Further, similar improvements were found with four individuals with chronic complete spinal cord injury (SCI) using the system in real-world neurorehabilitation settings. These findings demonstrate that unimpaired and SCI-impaired individuals learnt to control their sensorimotor EEG associated with MI tasks through VR-mediated BCI training, which was associated with improved BCI classification accuracy. Our findings highlight the potential of VR-mediated BCIs in enhancing neuromodulation, providing a foundation for future rehabilitation therapies.
Spinal cord injury disrupts the communication between the brain and the spinal circuits that orchestrate movement. To bypass the lesion, brain–computer interfaces have directly linked cortical activity to electrical stimulation of muscles, and have thus restored grasping abilities after hand paralysis. Theoretically, this strategy could also restore control over leg muscle activity for walking. However, replicating the complex sequence of individual muscle activation patterns underlying natural and adaptive locomotor movements poses formidable conceptual and technological challenges. Recently, it was shown in rats that epidural electrical stimulation of the lumbar spinal cord can reproduce the natural activation of synergistic muscle groups producing locomotion. Here we interface leg motor cortex activity with epidural electrical stimulation protocols to establish a brain–spine interface that alleviated gait deficits after a spinal cord injury in non-human primates. Rhesus monkeys (Macaca mulatta) were implanted with an intracortical microelectrode array in the leg area of the motor cortex and with a spinal cord stimulation system composed of a spatially selective epidural implant and a pulse generator with real-time triggering capabilities. We designed and implemented wireless control systems that linked online neural decoding of extension and flexion motor states with stimulation protocols promoting these movements. These systems allowed the monkeys to behave freely without any restrictions or constraining tethered electronics. After validation of the brain–spine interface in intact (uninjured) monkeys, we performed a unilateral corticospinal tract lesion at the thoracic level. As early as six days post-injury and without prior training of the monkeys, the brain–spine interface restored weight-bearing locomotion of the paralysed leg on a treadmill and overground. 
The implantable components integrated in the brain–spine interface have all been approved for investigational applications in similar human research, suggesting a practical translational pathway for proof-of-concept studies in people with spinal cord injury.
A reliable digital bridge restored communication between the brain and spinal cord and enabled natural walking in a participant with spinal cord injury. A spinal cord injury interrupts the communication between the brain and the region of the spinal cord that produces walking, leading to paralysis^1,2. Here, we restored this communication with a digital bridge between the brain and spinal cord that enabled an individual with chronic tetraplegia to stand and walk naturally in community settings. This brain–spine interface (BSI) consists of fully implanted recording and stimulation systems that establish a direct link between cortical signals^3 and the analogue modulation of epidural electrical stimulation targeting the spinal cord regions involved in the production of walking^4–6. A highly reliable BSI is calibrated within a few minutes. This reliability has remained stable over one year, including during independent use at home. The participant reports that the BSI enables natural control over the movements of his legs to stand, walk, climb stairs and even traverse complex terrains. Moreover, neurorehabilitation supported by the BSI improved neurological recovery. The participant regained the ability to walk with crutches overground even when the BSI was switched off. This digital bridge establishes a framework to restore natural control of movement after paralysis.
Objective: The purpose of this study is to detect and evaluate brain and spinal cord conduction function in rats with the functional electrical stimulation (FES) technique, and to provide a practical, simple atlas for electrode implantation in the microelectronic neural bridge system. Methods: FES was performed on the brain and spinal cord of 16 SD rats, and the normalized coordinates of the spinal nerve signals and the types of evoked motion were recorded. Results: ➀ The FES technique could activate the core area of the spinal cord and induce lower-limb movement of key muscles. ➁ The conduction function of the nerves between the rat primary motor cortex and the spinal cord could be assessed with the Cerebus system. Conclusions: FES can activate multiple sets of spinal nerve fibers to complete specific actions.
The spinal cord and its interactions with the brain are fundamental for movement control and somatosensation. However, brain and spinal cord electrophysiology in humans have largely been treated as distinct enterprises, in part due to the relative inaccessibility of the spinal cord. Consequently, there is a dearth of knowledge on human spinal electrophysiology, including the multiple pathologies of the central nervous system that affect the spinal cord as well as the brain. Here we exploit recent advances in the development of wearable optically pumped magnetometers (OPMs) which can be flexibly arranged to provide coverage of both the spinal cord and the brain concurrently in unconstrained environments. Our system for magnetospinoencephalography (MSEG) measures both spinal and cortical signals simultaneously by employing a custom-made spinal scanning cast. We evidence the utility of such a system by recording simultaneous spinal and cortical evoked responses to median nerve stimulation, demonstrating the novel ability for concurrent non-invasive millisecond imaging of brain and spinal cord.
An accurate classification of upper limb movements using electroencephalogram (EEG) signals is gaining significant importance in recent years due to the prevalence of brain-computer interfaces. The upper limbs in the human body are crucial since different skeletal segments combine to produce a range of motions that support routine daily tasks. Decoding EEG-based upper limb movements can be of great help to people with spinal cord injury (SCI) or other neuro-muscular diseases such as amyotrophic lateral sclerosis (ALS), primary lateral sclerosis, and periodic paralysis. These conditions can manifest in a loss of sensory and motor function, which could make a person reliant on others to provide care in day-to-day activities. We can detect and classify upper limb movement activities, whether executed or imagined, using an EEG-based brain-computer interface (BCI). Toward this goal, we focus our attention on decoding movement execution (ME) of the upper limb in this study. For this purpose, we utilize a publicly available EEG dataset that contains EEG signal recordings from fifteen subjects acquired using a 61-channel EEG device. We propose a method to classify four ME classes for different subjects using spectrograms of the EEG data through pre-trained deep learning (DL) models. Our proposed method of using EEG spectrograms for the classification of ME has shown significant results, where the highest average classification accuracy (for four ME classes) obtained is 87.36%, with one subject achieving the best classification accuracy of 97.03%. Clinical relevance: This research shows that movement execution of upper limbs is classified with significant accuracy by employing spectrograms of the EEG signals and a pre-trained deep learning model fine-tuned for the downstream task.
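As a rough illustration of the preprocessing this abstract describes, the sketch below turns a synthetic single-channel EEG trace into a band-limited log-spectrogram of the kind typically fed to a pre-trained CNN. The sampling rate, window length, and band limits here are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np
from scipy import signal

fs = 250  # assumed EEG sampling rate in Hz (not stated in the abstract)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 4)  # 4 s of synthetic single-channel EEG

# Short-time Fourier analysis: 1 s windows with 50% overlap.
f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)

# Keep a motor-relevant band (4-40 Hz, assumed) and log-scale the power,
# a common step before resizing to a CNN's expected input shape.
band = (f >= 4) & (f <= 40)
img = np.log(Sxx[band] + 1e-12)
print(img.shape)  # (frequency bins, time windows)
```

In a real pipeline each channel's spectrogram would then be resized and stacked to match the pre-trained model's input dimensions.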
This work presents the design, implementation, and feasibility evaluation of a Brain-Computer Interface (BCI) based on Motor Imagery (MI) developed to control a Functional Electrical Stimulation (FES) device. The aim of this system is to assist the upper limb motor recovery of patients with spinal cord injury (SCI). With this BCI-controlled FES system, the user performs open and close MI with either the left or right hand, which, if detected, is used to provide visual feedback and electrical stimulation to muscles in the forearm to perform the corresponding grasping movement. The system was evaluated with seven healthy subjects (HS group) and two SCI patients (SCI group) in several experimental sessions across different days. Each experimental session consisted of a training routine devoted to collecting calibration EEG data to train the BCI machine learning model, and a validation routine devoted to validating the system in online operation. The online system validation showed an accuracy of recognition of the MI task that ranged between 78% and 81% for HS participants and between 63% and 93% for SCI participants. Additionally, the time taken by the BCI system to trigger the activation of the FES device ranged between 7.05 s and 7.29 s for HS participants and between 8.43 s and 13.91 s for SCI participants. Finally, significant negative correlations were observed (r = -0.418, p = 0.024 and r = -0.437, p = 0.018 for left and right hand MI conditions, respectively) between the online BCI performance and a quantitative EEG parameter based on event-related desynchronization/synchronization analysis. The results of this work indicate the feasibility of the proposed BCI coupled to a FES device for SCI patients with a severe level of disability and provide evidence of the functionality of the proposed BCI system in a motor rehabilitation context.
Objective. In people with a cervical spinal cord injury (SCI) or degenerative diseases leading to limited motor function, restoration of upper limb movement has been a goal of the brain-computer interface field for decades. Recently, research from our group investigated non-invasive and real-time decoding of continuous movement in able-bodied participants from low-frequency brain signals during a target-tracking task. To advance our setup towards motor-impaired end users, we consequently chose a new paradigm based on attempted movement. Approach. Here, we present the results of two studies. During the first study, data of ten able-bodied participants completing a target-tracking/shape-tracing task on-screen were investigated in terms of improvements in decoding performance due to user training. In a second study, a spinal cord injured participant underwent the same tasks. To investigate the merit of employing attempted movement in end users with SCI, data of the spinal cord injured participant were recorded twice; once within an observation-only condition, and once while simultaneously attempting movement. Main results. We observed mean correlations well above chance level for continuous motor decoding based on attempted movement in able-bodied participants. Additionally, no global improvement over three sessions within five days, both in sensor and in source space, could be observed across all participants and movement parameters. In the participant with SCI, decoding performance well above chance was found. Significance. No presence of a learning effect in continuous attempted movement decoding in able-bodied participants could be observed. In contrast, non-significantly varying decoding patterns may promote the use of source space decoding in terms of generalized decoders utilizing transfer learning. 
Furthermore, above-chance correlations for attempted movement decoding ranging between those of observation only and executed movement were seen in one spinal cord injured participant, suggesting attempted movement decoding as a possible link between feasibility studies in able-bodied and actual applications in motor impaired end users.
Brain-computer interfaces (BCIs) are an emerging strategy for spinal cord injury (SCI) intervention that may be used to reanimate paralyzed limbs. This approach requires decoding movement intention from the brain to control movement-evoking stimulation. Common decoding methods use spike-sorting and require frequent calibration and high computational complexity. Furthermore, most applications of closed-loop stimulation act on peripheral nerves or muscles, resulting in rapid muscle fatigue. Here we show that a local field potential-based BCI can control spinal stimulation and improve forelimb function in rats with cervical SCI. We decoded forelimb movement via multi-channel local field potentials in the sensorimotor cortex using a canonical correlation analysis algorithm. We then used this decoded signal to trigger epidural spinal stimulation and restore forelimb movement. Finally, we implemented this closed-loop algorithm in a miniaturized onboard computing platform. This Brain-Computer-Spinal Interface (BCSI) utilized recording and stimulation approaches already used in separate human applications. Our goal was to demonstrate a potential neuroprosthetic intervention to improve function after upper extremity paralysis.
Restoring lower-limb function in patients with severe spinal cord injury (SCI) remains challenging. Spinal cord stimulation may enhance and reinstate lower-limb movements, but it is either used in open-loop control or its control depends upon residual motor functions, limiting its applicability in severely paralyzed individuals. The decoding of motor intentions from cortical signals may provide an interesting alternative in such cases. Electroencephalography (EEG) is an ideal solution since it is noninvasive and has been employed diffusely in the past to decode upper-limb movement intentions. Nonetheless, its application in lower-limb control remains underexplored. In this study, we investigated whether EEG can be used to decode lower-limb movement correlates in four SCI patients with varying injury severity during attempted left/right hip flexion or knee extension across four experimental sessions. We performed statistical analysis of event-related desynchronization/synchronization and machine learning classification to evaluate single and multi-window decoding performance. Our results suggest that EEG signals can often differentiate lower-limb movement attempts from rest, whereas decoding of left vs right and hip vs knee movements was more elusive. Left vs right decoding accuracy was improved through multi-window decoding, showing multiple sessions with above-chance results. In one patient, it was possible to attain above-chance three-class decoding (left/right/rest). Discriminating hip and knee movements proved more challenging. These findings establish a baseline for EEG decoding of lower-limb motor attempts in severely paralyzed individuals and pave the way for the development of brain-controlled neuroprosthetic systems.
Objective. Deep brain stimulation (DBS) is a well-established treatment for essential tremor, but may not be an optimal therapy, as it is always on, regardless of symptoms. A closed-loop (CL) DBS, which uses a biosignal to determine when stimulation should be given, may be better. Cortical activity is a promising biosignal for use in a closed-loop system because it contains features that are correlated with pathological and normal movements. However, neural signals are different across individuals, making it difficult to create a ‘one size fits all’ closed-loop system. Approach. We used machine learning to create a patient-specific, CL DBS system. In this system, binary classifiers are used to extract patient-specific features from cortical signals and determine when volitional, tremor-evoking movement is occurring to alter stimulation voltage in real time. Main results. This system is able to deliver stimulation up to 87%–100% of the time that subjects are moving. Additionally, we show that the therapeutic effect of the system is at least as good as that of current, continuous-stimulation paradigms. Significance. These findings demonstrate the promise of CL DBS therapy and highlight the importance of using subject-specific models in these systems.
The implementation of low-dimensional movement control by the central nervous system has been debated for decades. In this study, we investigated the dimensionality of the control signals received by spinal motor neurons when controlling either the ankle or knee joint torque. We first identified the low-dimensional latent factors underlying motor unit activity during torque-matched isometric contractions in male participants. Subsequently, we evaluated the extent to which motor units could be independently controlled. To this aim, we used an online control paradigm in which participants received the corresponding motor unit firing rates as visual feedback. We identified two main latent factors, regardless of the muscle group (vastus lateralis-medialis and gastrocnemius lateralis-medialis). The motor units of the gastrocnemius lateralis could be controlled largely independently from those of the gastrocnemius medialis during ankle plantarflexion. This dissociation of motor unit activity imposed similar behavior to the motor units that were not displayed in the feedback. Conversely, it was not possible to dissociate the activity of the motor units between the vastus lateralis and medialis muscles during the knee extension tasks. These results demonstrate that the number of latent factors estimated from linear dimensionality reduction algorithms does not necessarily reflect the dimensionality of volitional control of motor units. Overall, individual motor units were never controlled independently of all others but rather belonged to synergistic groups. Together, these findings provide evidence for a low-dimensional control of motor units constrained by common inputs, with notable differences between muscle groups.
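The latent-factor identification described above can be sketched with a linear dimensionality reduction on smoothed firing rates. This toy example (PCA on synthetic rates driven by two common inputs) stands in for the comparable linear methods the study uses; the drive waveforms, unit count, and 80% variance criterion are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_time, n_units = 1000, 20
t = np.linspace(0, 10, n_time)

# Two common drives shared across all motor units, plus independent noise.
drives = np.stack([np.sin(2 * np.pi * 0.3 * t),
                   np.cos(2 * np.pi * 0.5 * t)], axis=1)
mixing = rng.standard_normal((2, n_units))
rates = drives @ mixing + 0.2 * rng.standard_normal((n_time, n_units))

# Count the latent factors needed to explain 80% of the variance.
pca = PCA().fit(rates)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_factors = int(np.searchsorted(cum_var, 0.8) + 1)
print(n_factors)
```

The study's key caveat applies directly to this sketch: recovering two factors says nothing by itself about whether the units can be *volitionally* controlled along two independent dimensions, which is why the authors added the online feedback paradigm.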
Precise movement requires integrating descending motor control with sensory feedback. Sensory networks interact strongly with descending motor circuits within the spinal cord. We targeted this interaction by pairing stimulation of the motor cortex with coordinated stimulation of the cervical spinal cord. We used separate non-invasive and epidural experiments to test the hypothesis that the strongest muscle response would occur when paired brain and spinal cord stimuli simultaneously converge within the spinal cord. For non-invasive experiments, we measured arm and hand muscle motor evoked potentials (MEPs) in response to transcranial magnetic stimulation (TMS) and transcutaneous spinal cord stimulation (TSCS) in 16 individuals with chronic spinal cord injury (SCI) and 15 uninjured individuals. We compared this non-invasive approach to intraoperative paired stimulation experiments using dorsal epidural electrodes in 38 individuals undergoing surgery for cervical myelopathy. We observed augmented muscle responses to suprathreshold TMS when subthreshold TSCS stimuli were timed to converge synchronously in the spinal cord. At convergent timing, target muscle MEPs increased by 11.0% overall (13.3% in people with SCI, 6.2% in uninjured individuals) compared to non-convergent time intervals. Facilitation correlated with TSCS intensity, with intensity close to movement threshold being most effective. Facilitation did not correlate with SCI level or severity, indicating spared circuits were sufficient for this effect. Non-invasive pairing produced less facilitation compared to intraoperative (epidural) pairing. Thus, sensorimotor interactions in the cervical spinal cord can be targeted with paired stimulation in health and after SCI.
The task-dependent frequency of common neural drive to muscles has important applications for motor rehabilitation therapies. While it is well established that muscle dynamics influence the synchronicity of neural drive, the modulation of this coherence between static and dynamic movements remains unclear. Transcutaneous electrical spinal cord stimulation (TESCS) is believed to enhance spinal cord excitability, potentially improving brain-muscle communication; however, its effect on common neural drive to muscles has not yet been reported. This study aimed to investigate differences in intermuscular coherence (IMC) frequency between static and dynamic movement tasks and determine whether it is feasible to enhance it by TESCS. Participants performed static and dynamic hand grip tasks at different timepoints with respect to stimulation, set to 80% tolerable intensity. Surface EMG signals were recorded from the flexor digitorum superficialis (FDS) and extensor digitorum communis (EDC) muscles during each trial to determine beta- (15-30 Hz) and gamma- (30-48 Hz) band intermuscular coherence. The sum of IMC (IMCarea) was significantly greater (p_B = 0.018, p_D = 0.0183, p_IM = 0.0172, p_5 = 0.0206, p_10 = 0.0183, p_15 = 0.0172) in the gamma-band for the dynamic task compared to the static task at every timepoint (before TESCS, during TESCS, and immediately, 5 min, 10 min, and 15 min after TESCS). This may reflect a mechanism of increased efficiency of corticospinal interactions and could have implications for the types of movements that should be performed while receiving TESCS. There was no immediate measurable effect of TESCS on IMCarea at any timepoint in the beta-band (p = 0.25, p = 0.31) or gamma-band (p = 0.52, p = 0.73) for either the static or dynamic task, respectively. This could be explained by corticospinal networks already working at maximum capacity in able-bodied individuals, or a longer duration of TESCS may be required to elicit a measurable effect.
While the intra-task difference in beta- and gamma-band IMCarea between static and dynamic tasks was statistically significant (p_IM = 0.0275, p_5 = 0.0275, p_15 = 0.0031) at timepoints after stimulation, we did not find direct evidence that TESCS influenced this beta-gamma interaction. Thus, further investigation is needed to establish any causal relationship.
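The IMCarea measure used in this abstract can be sketched as magnitude-squared coherence between two EMG channels, summed over the beta and gamma bands. The sampling rate, window length, and the synthetic shared-drive model below are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np
from scipy import signal

fs = 1000  # assumed EMG sampling rate in Hz
rng = np.random.default_rng(3)
n = 20 * fs  # 20 s of synthetic data

# Two EMG channels driven by a shared input plus independent noise.
common = rng.standard_normal(n)
emg_fds = common + 0.8 * rng.standard_normal(n)  # flexor channel
emg_edc = common + 0.8 * rng.standard_normal(n)  # extensor channel

# Magnitude-squared coherence with 1 s windows (1 Hz resolution).
f, coh = signal.coherence(emg_fds, emg_edc, fs=fs, nperseg=fs)

# Sum coherence within each band, analogous to the abstract's IMCarea.
beta = (f >= 15) & (f <= 30)
gamma = (f > 30) & (f <= 48)
imc_area_beta = float(coh[beta].sum())
imc_area_gamma = float(coh[gamma].sum())
print(imc_area_beta > 0, imc_area_gamma > 0)
```

With a genuinely shared drive the coherence is elevated across the whole spectrum here; in real EMG the band-specific structure (beta vs gamma) is exactly what carries the task-dependent information the study analyzes.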
No abstract available
Spinal Cord Injury (SCI) is a debilitating condition that can affect the motor and sensory functions of the body. Exploring the use of Electroencephalography (EEG) signals to evaluate the recovery of motor function in patients with SCI has drawn increasing attention in recent years. The aim of the study was to develop a quantitative framework for assessing motor function recovery in SCI patients by analyzing EEG signals, with a specific focus on peak-to-peak (P-P) values of the power spectral density (PSD). The EEG data were collected from ten SCI patients and ten healthy controls during upper-limb motor tasks using 64 channels, with six channels selected for in-depth investigation. The signals were processed using a 2nd-order Butterworth bandpass filter (4-45 Hz), baseline wandering removal, and a notch filter. We computed key parameters such as the median, mean, skewness, kurtosis, standard deviation, and PSD. PSD peak-to-peak (P-P) values were used to categorize the data and assess motor function recovery. Significant differences in P-P values (P = 0.0091) were found between healthy controls and SCI patients across six brain regions, indicating altered neural patterns linked to motor function recovery in SCI patients. Monitoring P-P values over time aids in evaluating treatment efficacy and provides a vital tool for improving patient care and rehabilitation techniques.
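The filtering-plus-PSD pipeline this abstract outlines can be sketched as follows on synthetic data: a 2nd-order Butterworth band-pass (4-45 Hz, matching the abstract), Welch PSD, then the peak-to-peak value within the pass band. The sampling rate and Welch window are assumptions, since the abstract does not state them.

```python
import numpy as np
from scipy import signal

fs = 256  # assumed sampling rate; the abstract does not state it
rng = np.random.default_rng(4)
eeg = rng.standard_normal(fs * 10)  # 10 s of synthetic single-channel EEG

# 2nd-order Butterworth band-pass (4-45 Hz), applied zero-phase.
b, a = signal.butter(2, [4, 45], btype="bandpass", fs=fs)
filtered = signal.filtfilt(b, a, eeg)

# Welch PSD, then the peak-to-peak (P-P) value within the pass band.
f, psd = signal.welch(filtered, fs=fs, nperseg=fs * 2)
band = (f >= 4) & (f <= 45)
pp_value = float(psd[band].max() - psd[band].min())
print(pp_value > 0)
```

In the study this scalar would be computed per channel and per session, and its trajectory over time compared between patients and controls.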
Traditional machine learning methods struggle with efficiency when processing large-scale data, while deep learning approaches, such as convolutional neural networks (CNN) and long short-term memory networks (LSTM), exhibit certain limitations when handling long-duration sequences: the convolutional kernel size must be determined through repeated experiments, and LSTM has difficulty capturing effective information from long time sequences. In this paper, we propose a transfer learning (TL) method based on the Transformer, which constructs a new network architecture for feature extraction and classification of electroencephalogram (EEG) signals in the time-space domain, named TS-former. The frequency- and spatial-domain information of EEG signals is extracted using the Filter Bank Common Spatial Pattern (FBCSP), and the resulting features are processed by the Transformer to capture temporal patterns. The input features are processed by the Transformer using a multi-head attention mechanism, and the final classification outputs are generated through a fully connected layer. A classification model is first pre-trained; when performing a new classification task, fine-tuning modifies only some layers of the model to adapt it to the new data and achieve good classification results. The experiments are conducted on a motor imagery (MI) EEG dataset from 16 spinal cord injury (SCI) patients. After training the model using ten-times-repeated ten-fold cross-validation, the average classification accuracy reached 95.09%. Our experimental results confirm a new approach to building a brain-computer interface (BCI) system for rehabilitation training of SCI patients.
Robotic systems such as Lokomat® have shown promising results in people with severe motor impairments who suffered a stroke or other neurological damage. Robotic devices have also been used by people with more severe damage, such as Spinal Cord Injury (SCI), using feedback strategies that provide information about brain activity in real time. This study proposes a novel Motor Imagery (MI)-based Electroencephalogram (EEG) Visual Neurofeedback (VNFB) system for Lokomat® to teach individuals how to modulate their own μ (8-12 Hz) and β (15-20 Hz) rhythms during passive walking. Two individuals with complete SCI tested our VNFB system, completing a total of 12 sessions, each on different days. For evaluation, clinical outcomes before and after the intervention and brain connectivity were analyzed. As findings, the sensitivity related to light touch and painful discrimination increased for both individuals. Furthermore, an improvement in neurogenic bladder and bowel functions was observed according to the American Spinal Injury Association Impairment Scale, Neurogenic Bladder Symptom Score, and Gastrointestinal Symptom Rating Scale. Moreover, brain connectivity between different EEG locations significantly (p < 0.05) increased, mainly in the motor cortex. As another highlight, both SCI individuals enhanced their μ rhythm, suggesting motor learning. These results indicate that our gait training approach may have substantial clinical benefits in complete SCI individuals.
No abstract available
Background Spinal cord injury (SCI) may lead to impaired motor function, autonomic nervous system dysfunction, and other dysfunctions. A Brain-Computer Interface (BCI) system based on motor imagery (MI) can provide more scientific and effective treatment solutions for SCI patients. Methods According to the interaction between brain regions, a coherence-based graph convolutional network (C-GCN) method is proposed to extract the temporal-frequency-spatial features and functional connectivity information of EEG signals. The proposed algorithm constructs multi-channel EEG features based on coherence networks as graph signals and then classifies MI tasks. Different from the traditional graph convolutional network (GCN), the C-GCN method uses the coherence network of EEG signals to determine MI-related functional connections, which are used to represent the intrinsic connections between EEG channels in different rhythms and different MI tasks. EEG data of SCI patients and healthy subjects have been analyzed, where healthy subjects served as the control group. Results The experimental results show that the C-GCN method can achieve the best classification performance with certain reliability and stability; the highest classification accuracy is 96.85%. Conclusion The proposed framework can provide an effective theoretical basis for the rehabilitation treatment of SCI patients.
Neurogenic bladder (NB) dysfunction in individuals with complete spinal cord injury (SCI) is a condition that significantly affects quality of life. Despite the prevalence of interventions, there is a substantial gap in effective treatments for this dysfunction. This study proposes robotic-assisted gait training combined with motor imagery (MI)-based brain-computer interface (BCI) to induce improved cortical modulation, and consequently improve bladder function in patients with SCI. The study involved seven men with complete and chronic SCI in a protocol comprising 24 sessions of robotic-assisted walking with BCI and MI. This regimen was designed to teach both mu (µ, 8–12 Hz) and beta (β, 15–20 Hz) modulation through MI practices using multi-channel EEG neurofeedback (NFB), focusing on sensorimotor rhythm (SMR) activation. Clinical outcomes were measured using the neurogenic bladder symptom score (NBSS), which revealed substantial improvements in bladder control among participants. EEG analysis confirmed a significant correlation between modulation of µ and β rhythms with decreased NBSS scores. Our findings support that robotic-assisted gait training combined with MI-based BCI effectively modulates with more precision the cortical µ and β rhythms and improves NB dysfunction in SCI individuals.
No abstract available
Objective. Electroencephalogram (EEG) signals exhibit temporal–frequency–spatial multi-domain feature, and due to the nonplanar nature of the brain surface, the electrode distributions follow non-Euclidean topology. To fully resolve the EEG signals, this study proposes a temporal–frequency–spatial multi-domain feature fusion graph attention network (GAT) for motor imagery (MI) intention recognition in spinal cord injury (SCI) patients. Approach. The proposed model uses phase-locked value (PLV) to extract spatial phase connectivity information between EEG channels and continuous wavelet transform to extract valid EEG information in the time–frequency domain. It then models as a graph data structure containing multi-domain information. The gated recurrent unit and GAT learn EEG’s dynamic temporal–spatial information. Finally, the fully connected layer outputs the MI intention recognition results. Main results. After 10 times 10-fold cross-validation, the proposed model can achieve an average accuracy of 95.82%. Furthermore, this study analyses the event-related desynchronization/event-related synchronization and PLV brain network to explore the brain activity of SCI patients during MI. Significance. This study confirms the potential of the proposed model in terms of EEG decoding performance and provides a reference for the mechanism of neural activity in SCI patients.
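The phase-locked value (PLV) this abstract uses to build channel-connectivity graphs can be sketched as below: extract instantaneous phases via the analytic (Hilbert) signal, then take the magnitude of the mean phase-difference vector. The 10 Hz test rhythm, noise level, and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250
rng = np.random.default_rng(6)
t = np.arange(fs * 4) / fs

# Two channels sharing a 10 Hz rhythm with a fixed phase lag plus noise.
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.5) + 0.2 * rng.standard_normal(t.size)

# Instantaneous phases via the analytic signal, then the PLV:
# the magnitude of the mean phase-difference vector on the unit circle.
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))
print(plv > 0.7)
```

A PLV near 1 indicates a stable phase relationship regardless of the lag itself; computing this for every channel pair yields the adjacency structure that the proposed graph attention network consumes.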
Chronic spinal cord injury (SCI) patients present poor motor cortex activation during movement attempts. The reactivation of this brain region can be beneficial for them, for instance, allowing them to use brain-machine interfaces for motor rehabilitation or restoration. These brain-machine interfaces generally use electroencephalography (EEG) to measure the cortical activation during the attempts of movement, quantifying it as the event-related desynchronization (ERD) of the alpha/mu rhythm. Based on previous evidence showing that higher tonic EEG alpha power is associated with higher ERD, we hypothesized that artificially increasing the alpha power over the motor cortex of these patients could enhance their ERD (i.e., motor cortical activation) during movement attempts. We used EEG neurofeedback (NF) to enhance the tonic EEG alpha power, providing real-time visual feedback of the alpha oscillations measured over the motor cortex. This approach was evaluated in a C4, ASIA A, SCI patient (9 months after the injury) who did not present ERD during the movement attempts of his paralyzed hands. The patient performed 4 NF sessions (on 4 consecutive days) and screenings of his EEG activity before and after each session. After the intervention, the patient presented a significant increase in the alpha power over the motor cortex, and a significant enhancement of the mu ERD in the contralateral motor cortex when he attempted to close the assessed right hand. As a proof-of-concept investigation, this article shows how a short NF intervention might be used to enhance motor cortical activation in patients with chronic tetraplegia.
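The ERD measure that runs through this abstract (and several others above) is simply the percentage drop in band power during a movement attempt relative to a resting baseline. A minimal sketch on synthetic data, with assumed sampling rate, band edges, and rhythm amplitudes:

```python
import numpy as np
from scipy import signal

fs = 250
rng = np.random.default_rng(5)
t = np.arange(fs * 4) / fs

# Synthetic EEG: a strong 10 Hz mu rhythm at rest that weakens during
# a movement attempt (the desynchronization being measured).
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
attempt = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def mu_power(x):
    """Summed Welch PSD in the mu band (8-12 Hz)."""
    f, psd = signal.welch(x, fs=fs, nperseg=fs)
    return float(psd[(f >= 8) & (f <= 12)].sum())

# ERD as a percentage drop relative to the resting baseline.
erd = 100.0 * (mu_power(rest) - mu_power(attempt)) / mu_power(rest)
print(erd > 0)
```

The study's hypothesis maps onto this formula directly: raising tonic alpha power raises the baseline term, giving the attempt-related suppression more room to express itself as a larger ERD.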
Brain computer interfaces (BCIs) are thought to revolutionize rehabilitation after SCI, e.g., by controlling neuroprostheses, exoskeletons, functional electrical stimulation, or a combination of these components. However, most BCI research was performed in healthy volunteers, and it is unknown whether these results translate to patients with spinal cord injury, whose brains may have reorganized through neuroplasticity. We sought to examine whether high-density EEG (HD-EEG) could improve the performance of motor-imagery classification in patients with SCI. We recorded HD-EEG with 256 channels in 22 healthy controls and 7 patients with 14 recordings (4 patients had more than one recording) in an event-related design. Participants were instructed acoustically to either imagine, execute, or observe foot and hand movements, or to rest. We calculated the Fast Fourier Transform (FFT) and the full-frequency directed transfer function (ffDTF) for each condition and classified conditions pairwise with support vector machines when using only 2 channels over the sensorimotor area, the full 10–20 montage, a high-density montage of the sensorimotor cortex, or the full HD montage. Classification accuracies were comparable between patients and controls, with an advantage for controls in classifications that involved the foot movement condition. Full montages led to better results for both groups (p < 0.001), and classification accuracies were higher for FFT than for ffDTF (p < 0.001), for which the feature vector might be too long. However, the full 10–20 montage was comparable to the high-density configurations. Motor-imagery driven control of neuroprostheses or BCI systems may perform as well in patients as in healthy volunteers with adequate technical configuration. We suggest the use of a whole-head montage and analysis of a broad frequency range.
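The FFT features used here for pairwise classification can be sketched as per-channel band powers flattened into one feature vector. This is a simplified stand-in (the band limits, epoch length, and two-channel data below are illustrative assumptions, not values from the study):

```python
import numpy as np

def bandpower_features(epoch, fs, bands=((8, 12), (13, 30))):
    """Per-channel FFT band power, flattened into one feature vector.

    epoch: (n_channels, n_samples) array of one EEG trial.
    """
    n = epoch.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2 / n
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
fs = 250
t = np.arange(2 * fs) / fs
# "Rest" epoch: strong 10 Hz alpha on channel 0; "movement" epoch:
# attenuated alpha on the same channel (a caricature of ERD).
rest = np.vstack([2.0 * np.sin(2 * np.pi * 10 * t), 0.1 * rng.standard_normal(t.size)])
move = np.vstack([0.5 * np.sin(2 * np.pi * 10 * t), 0.1 * rng.standard_normal(t.size)])
f_rest = bandpower_features(rest, fs)
f_move = bandpower_features(move, fs)
```

Vectors like `f_rest` and `f_move`, computed per trial, are what a pairwise SVM would then separate; the alpha-band entry of channel 0 already discriminates the two synthetic conditions.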
Recent advances in neuroprostheses provide us with promising ideas of how to improve the quality of life in people suffering from impaired motor functioning of upper and lower limbs. Especially for patients after spinal cord injury (SCI), futuristic devices that are controlled by thought via brain-computer interfaces (BCIs) might be of tremendous help in managing daily tasks and restoring at least some mobility. However, certain problems arise when trying to implement BCI technology, especially in such a heterogeneous patient group. A plethora of processes occurring after the injuries change the brain's structure as well as its functionality, collectively referred to as neuroplasticity. These changes are very different between individuals, leading to an increasing interest in revealing the exact changes occurring after SCI. In this study we investigated event-related potentials (ERPs) derived from electroencephalography (EEG) signals recorded during the (attempted) execution and imagination of hand and foot movements in healthy subjects and patients with SCI. As ERPs, and especially early components, are of interest for BCI research, we aimed to investigate differences between 22 healthy volunteers and 7 patients (mean age = 51.86, SD = 15.49) suffering from traumatic or non-traumatic SCI for 2–314 months (mean = 116.57, SD = 125.55). We aimed to explore differences in ERP responses as well as the general presence of components that might be worth considering for incorporation into BCI research. In order to match the real-life situation of BCIs for controlling neuroprostheses, we worked with small trial numbers (<25) only. We obtained a focal potential over Pz in ten healthy participants but in none of the patients after lenient artifact rejection. The potential was characterized by a high amplitude, it correlated with the repeated movements (6 times in 6 s), and in nine subjects it significantly differed from a resting condition. Furthermore, there are strong arguments against possible confounding factors explaining the potential's appearance. This phenomenon, occurring when movements are repeatedly conducted, might represent a possible potential to be used in futuristic BCIs, and further studies should try to investigate the replicability of its appearance.
Background Bimanual motor training is an effective neurological rehabilitation strategy. However, its use has rarely been investigated in patients with paralysis caused by spinal cord injury (SCI). Therefore, we conducted a case study to investigate the effects of robot-assisted task-oriented bimanual training (RBMT) on upper limb function, activities of daily living, and movement-related sensorimotor activity in a patient with SCI. Methods A patient with bilateral upper limb paresis due to incomplete cervical SCI underwent 20 sessions of RBMT. Functional recovery was measured using clinical scales for upper limb motor function and activities of daily living. Training-induced neuroplasticity was evaluated using event-related desynchronization (ERD) induced by movement of the right hand (the more affected side), recorded on the electroencephalogram (EEG). Results RBMT improved the patient’s upper limb motor function and activity independence. At baseline, our EEG paradigm demonstrated an ipsilateral predominance of movement-related ERD responses over the sensorimotor cortex (SMC) in relation to the moving hand. Following the RBMT, the ERD pattern shifted from being predominantly ipsilateral to a contralateral allocation. Conclusion The present case study provides preliminary evidence to support the therapeutic use of RBMT to restore upper limb function in patients with incomplete SCI. The recovery of function following SCI might be related to the rebalancing of sensorimotor activation.
Motor imagery-based brain-computer interfaces (MI-BCIs) hold significant promise for rehabilitation training in individuals with neurological impairments such as stroke and spinal cord injury (SCI). Achieving precise and robust lower limb movement prediction for each patient is crucial. However, the variability in MI response frequencies and brain activation patterns among subjects presents a great challenge to the generalizability of MI-BCIs. This paper proposes a Tuned Heuristic Fusion Graph Convolutional Network (THFGCN) for limb movement prediction in rehabilitation scenarios. THFGCN innovatively designs a learnable EEG frequency band tuned module and a heuristic space topology module. These two modules allow for the intricate extraction of both frequency and spatial topological features, utilizing graph adjacency matrices that encapsulate channel correlations and spatial relationships, hence fostering individualized analysis and enhanced generalizability across subjects. Furthermore, a spatio-temporal convolution module paired with a feature map attention mechanism is proposed to extract the critical spatio-temporal features of electroencephalogram (EEG) data. Validation experiments on the PhysioNet and LLM-BCImotion datasets against six mainstream methods demonstrate that THFGCN outperforms state-of-the-art methods, achieving 88.41% and 82.82% accuracy in the within-subject case, and 65.93% and 60.56% accuracy in the cross-subject case, respectively. Detailed frequency band weight and T-distributed Stochastic Neighbor Embedding visualization validate the effectiveness of proposed modules. Furthermore, feature interpretability analysis proves the extracted features’ profound MI task relevance, underlining THFGCN’s exceptional interpretability. 
Note to Practitioners—Motor imagery-based brain-computer interfaces (MI-BCIs) have shown great potential in rehabilitation training for patients with neurological disorders such as stroke and spinal cord injury (SCI), due to their capacity to promote neural plasticity and functional recovery. An accurate and generalized MI-BCI can accelerate the neural remodeling process in patients and promote the clinical application of BCI technology. In this paper, we propose a new graph convolutional network (GCN)-based MI classification framework to predict limb movements during rehabilitation training. This framework incorporates a frequency tuned topology module and a heuristic space topology module to extract private frequency-domain topological features and shared spatial topological features during MI across subjects. Our method improves the generalization ability to new subjects while ensuring accurate limb movement prediction for single subjects. We show the superiority of our method on two MI datasets, showing that the extracted features have effective physiological interpretability. These findings suggest that the proposed method is not only effective but also transparent, and may support future research in understanding neural mechanisms and optimizing rehabilitation strategies for motor-impaired populations.
Accurate classification of motor imagery tasks is essential for enhancing brain-computer interfaces (BCIs) in spinal cord injury (SCI) rehabilitation. The use of raw time samples often overlooks temporal dependencies in electroencephalography (EEG), limiting classification accuracy. We propose the Regularized Common Temporal Pattern (RCTempP), a novel time-domain feature extraction method that emphasizes discriminative temporal samples. RCTempP was evaluated on EEG recordings from SCI patients imagining five distinct hand movements within the 0.3–3 Hz band. Compared to raw time samples, Common Temporal Pattern (CTP), and Extended CTP (ECTP) approaches, RCTempP yielded statistically significant improvements in eight out of ten class pairs, for which the average accuracy across subjects ranged from 70.4% to 75.1%. Performance was assessed using a fivefold cross-validation protocol to ensure robust and generalizable results. Importantly, RCTempP's significant temporal filters emerged post-cue onset and corresponded to movement-related cortical potential peaks. These findings highlight RCTempP's promise for advancing motor imagery BCIs in SCI rehabilitation.
Brain-Computer Interfaces (BCIs) are revolutionizing neurorehabilitation, providing crucial communication and control for individuals with severe motor impairments from conditions like ALS, spinal cord injuries, or stroke. By creating direct links between brain activity and external devices, BCIs bypass damaged neural pathways, thus restoring motor function and significantly enhancing quality of life. Electroencephalography (EEG) is a favored BCI modality due to its accessibility and cost-effectiveness. However, a major challenge lies in the substantial impact of cognitive and individual differences on motor imagery (MI) task performance and overall BCI accuracy. This research introduces a novel method to overcome these challenges, focusing on enhanced MI classification. Our approach synergistically integrates Common Spatial-Spectral Pattern filters with the Tunable-Q Wavelet Transform. This powerful combination was applied to the extensive CHO-2017 database (52 participants), which uniquely captures significant inter-individual cognitive variations, specifically to distinguish between left- and right-hand MI tasks. A critical aspect of our method is the utilization of only the top 10 most discriminative features extracted through this hybrid technique. This deliberate streamlining maximizes classification efficacy while maintaining computational efficiency. This tailored feature set performed well across 99% of participants. When integrated with a K-Nearest Neighbors classifier, this approach achieved an outstanding accuracy of 98.84%, notably surpassing existing state-of-the-art methods in the field. These findings hold significant promise for developing more accurate and robust BCI systems capable of extracting optimal commands for diverse MI applications, ultimately advancing neurorehabilitation outcomes.
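The top-10 feature-selection plus K-Nearest Neighbors step can be illustrated compactly. The ranking criterion below is a generic Fisher-type score standing in for the paper's CSSP/TQWT-derived discriminability, and the data are synthetic, so only the selection-then-KNN workflow is taken from the abstract:

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by between-class mean separation over within-class
    variance (a generic stand-in for the paper's discriminative ranking)."""
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12
    return num / den

def knn_predict(X_train, y_train, x, k):
    """Plain k-nearest-neighbour vote with Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(round(float(nearest.mean())))

rng = np.random.default_rng(7)
# 40 trials x 20 features; only the first 2 features are informative.
y = np.repeat([0, 1], 20)
X = rng.standard_normal((40, 20))
X[y == 1, :2] += 2.0

top10 = np.argsort(fisher_scores(X, y))[::-1][:10]     # keep 10 best features
# Nearest neighbour of a training trial is itself, so k=1 recovers its label.
pred = knn_predict(X[:, top10], y, X[5, top10], k=1)
```

In practice the selected indices would be fixed on training folds and reused on held-out trials; here the informative features land in `top10` because their class means are well separated.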
Objective: Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) have been proposed for the rehabilitation of people with disabilities, and their successful application to restore motor functions in individuals with Spinal Cord Injury (SCI) remains a big challenge. This work proposes an Electroencephalography (EEG) gait imagery-based BCI to promote motor recovery on the Lokomat platform, in order to allow a clinical intervention by acting simultaneously on both central and peripheral nervous mechanisms. Methods: As a novelty, our BCI system accurately discriminates gait imagery tasks during walking and further provides multi-channel EEG-based Visual Neurofeedback (VNFB) linked to the μ (8–12 Hz) and β (15–20 Hz) rhythms around Cz. VNFB is carried out through a Euclidean distance-based cluster analysis strategy, where the weighted mean MI feature vector is used as a reference to teach individuals with SCI to modulate their cortical rhythms. Results: The developed BCI reached an average classification accuracy of 74.4%. In addition, feature analysis demonstrated a reduction in cluster variance after several sessions, whereas metrics associated with self-modulation indicated a greater distance between both classes: passive walking with gait MI and passive walking without MI. Conclusion: The results suggest that intervention with a gait MI-based BCI with VNFB may allow the individuals to appropriately modulate their rhythms of interest around Cz. Significance: This work contributes to the development of advanced systems for gait rehabilitation by integrating Machine Learning and neurofeedback techniques to restore lower-limb functions of SCI individuals.
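The distance-to-reference neurofeedback idea (weighted mean MI feature vector as the cluster reference) can be sketched in a few lines. The exponential score mapping and the numbers below are illustrative assumptions; only the weighted-mean reference and Euclidean distance come from the abstract:

```python
import numpy as np

def vnfb_score(feature_vec, reference, scale):
    """Distance-based feedback score: closer to the reference MI centroid
    gives a higher score in (0, 1]. `scale` (an assumed choice) sets how
    quickly the score decays with Euclidean distance."""
    d = np.linalg.norm(feature_vec - reference)
    return float(np.exp(-d / scale))

# Reference = weighted mean of previously collected MI feature vectors.
mi_feats = np.array([[1.0, 0.2], [1.2, 0.1], [0.9, 0.3]])
weights = np.array([0.5, 0.3, 0.2])
reference = (weights[:, None] * mi_feats).sum(axis=0) / weights.sum()

near = vnfb_score(np.array([1.0, 0.2]), reference, scale=1.0)  # close to cluster
far = vnfb_score(np.array([4.0, 3.0]), reference, scale=1.0)   # far from cluster
```

A score computed online this way can drive the visual feedback: the closer the user's current μ/β feature vector is to the MI cluster, the stronger the displayed reward.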
Individuals who suffer from the most severe motor disabilities can improve their quality of life by controlling and directing mechanical and electronic devices. For Spinal Cord Injured (SCI) patients, attempted hand movements can be classified using electroencephalography (EEG). This research aims to develop a hybrid CNN-LSTM (Convolutional Neural Network - Long Short Term Memory) architecture for multichannel EEG signal classification. It is a challenging task to classify real-world multichannel EEG data from SCI patients. The proposed research preprocessed the EEG data to improve the signal-to-noise ratio and arranged the data so as to extract additional information. The preprocessing step includes filtering, downsampling, and artifact removal, while the postprocessing step includes time-frequency representation and spatial information encoding. A hybrid CNN-LSTM is used for feature extraction and classification. The proposed method has been implemented on a dataset consisting of 5 different classes of attempted hand movements from 10 SCI patients. An average classification accuracy of 92.36% is achieved for 5-class classification. To check the global validity of the proposed network, the BCI Competition IV data were classified by the proposed method, achieving 92.70% overall accuracy.
Electroencephalography (EEG) is a non-invasive technique with high temporal resolution and cost-effective, portable, and easy-to-use features. Motor imagery EEG (MI-EEG) data classification is one of the key applications within brain–computer interface (BCI) systems, utilizing EEG signals from motor imagery tasks. BCI is very useful for people with severe mobility issues like quadriplegics, spinal cord injury patients, stroke patients, etc., giving them the freedom to a certain extent to perform activities without the need for a caretaker, like driving a wheelchair. However, motion artifacts can significantly affect the quality of EEG recordings. The conventional EEG enhancement algorithms are effective in removing ocular and muscle artifacts for a stationary subject but not as effective when the subject is in motion, e.g., a wheelchair user. In this research study, we propose an empirical error model-based artifact removal approach for the cross-subject classification of motor imagery (MI) EEG data using a modified CNN-based deep learning algorithm, designed to assist wheelchair users with severe mobility issues. The classification method applies to real tasks with measured EEG data, focusing on accurately interpreting motor imagery signals for practical application. The empirical error model evolved from the inertial sensor-based acceleration data of the subject in motion, the weight of the wheelchair, the weight of the subject, and the surface friction of the terrain under the wheelchair. Three different wheelchairs and five different terrains, including road, brick, concrete, carpet, and marble, are used for artifact data recording. After evaluating and benchmarking the proposed CNN and empirical model, the classification accuracy achieved is 94.04% for distinguishing between four specific classes: left, right, front, and back. This accuracy demonstrates the model’s effectiveness compared to other state-of-the-art techniques. 
The comparative results show that the proposed approach is a potentially effective way to improve the decoding efficiency of motor imagery BCIs.
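The paper's empirical error model is built from inertial measurements, wheelchair and subject weight, and terrain friction; as a simplified illustration of the underlying idea of removing motion-correlated contamination, one can regress each EEG channel on the accelerometer channels and subtract the fitted component. This is a generic stand-in, not the paper's model:

```python
import numpy as np

def regress_out_motion(eeg, accel, lam=1e-3):
    """Remove motion-correlated activity by least-squares regression of each
    EEG channel (rows of `eeg`) on accelerometer channels (rows of `accel`).
    `lam` is a small ridge term for numerical stability."""
    A = accel.T                                            # (time, axes)
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ eeg.T)
    return eeg - (A @ W).T

rng = np.random.default_rng(8)
n = 1000
brain = 0.5 * rng.standard_normal((2, n))                  # "true" EEG
accel = rng.standard_normal((3, n))                        # motion signals
mix = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.8]])         # assumed coupling
contaminated = brain + mix @ accel

cleaned = regress_out_motion(contaminated, accel)
raw_corr = abs(np.corrcoef(contaminated[0], accel[0])[0, 1])
resid_corr = abs(np.corrcoef(cleaned[0], accel[0])[0, 1])
```

After regression the residual is nearly orthogonal to the motion channels, which is the property the empirical error model exploits before MI classification.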
This study introduces an alternative approach to electroencephalography (EEG) time-frequency analysis based on time-varying autoregressive (TV-AR) models in a cascade configuration to independently monitor key EEG spectral components. The method is evaluated for its neurophysiological interpretation and effectiveness in motor-related brain-computer interface (BCI) applications. Specifically, we assess the ability of the tracked EEG poles to discriminate between rest, movement execution (ME) and movement imagination (MI) in healthy subjects, as well as movement attempts (MA) in individuals with spinal cord injury (SCI). Our results show that pole tracking effectively captures broad changes in EEG dynamics, such as transitions between rest and movement-related states. It outperformed traditional EEG-based features, increasing detection accuracy for ME by an average of 4.1% (with individual improvements reaching as high as 15%) and MI by an average of 4.5% (up to 13.8%) compared to time-domain low-frequency EEG features. Similarly, compared to alpha/beta band power, the method improved ME detection by an average of 5.9% (up to 10.4%) and MI by an average of 4.3% (up to 10.2%), with results averaged across 15 healthy participants. In one participant with SCI, pole tracking improved MA detection by 12.9% over low-frequency EEG features and 4.8% over alpha/beta band power. However, its ability to distinguish finer movement details within specific movement types was limited. Additionally, the temporal evolution of the extracted pole tracking features revealed event-related desynchronization phenomena, typically observed during ME, MA and MI, as well as increases in frequency, which are of neurophysiological interest.
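The pole-tracking idea above can be illustrated with a single AR(2) model: for an oscillatory EEG segment, the fitted model's complex pole pair sits near the oscillation frequency, so tracking poles over sliding windows follows the spectral peak. A minimal synthetic sketch (the paper uses cascaded time-varying AR models; the least-squares AR(2) fit here is a simplification):

```python
import numpy as np

def ar2_pole_freq(x, fs):
    """Fit an AR(2) model x[n] = a1*x[n-1] + a2*x[n-2] by least squares and
    return the frequency (Hz) of its complex pole pair, i.e. the dominant
    oscillation in the window."""
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    roots = np.roots([1.0, -a[0], -a[1]])       # z^2 - a1*z - a2 = 0
    return float(abs(np.angle(roots[0])) * fs / (2 * np.pi))

fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
alpha = np.sin(2 * np.pi * 10 * t) + 0.05 * rng.standard_normal(t.size)
beta = np.sin(2 * np.pi * 22 * t) + 0.05 * rng.standard_normal(t.size)
f_alpha = ar2_pole_freq(alpha, fs)   # close to 10 Hz
f_beta = ar2_pole_freq(beta, fs)     # close to 22 Hz
```

Running `ar2_pole_freq` on consecutive windows gives a pole-frequency trajectory; drops in pole magnitude or shifts in pole angle are the kind of dynamics the study uses to separate rest from movement-related states.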
Objective. The spinal cord is a vital part of the central nervous system, and its neural signals offer valuable insight into sensory and motor function. Accurate localization of the neural sources that generate spinal cord potentials (SCPs) is essential for advancing both basic research and clinical applications. This study aims to assess the feasibility of applying established EEG source localization methods to simulated SCP data. Approach. We constructed a biophysical model of the upper body and head to simulate surface potentials generated by dipolar sources within the gray matter of the cervical spinal cord. Electrodes were distributed around the neck and upper back to capture these signals. Inverse solutions were obtained using established source localization methods, including sLORETA, and performance was evaluated across varying signal-to-noise ratios (SNRs), electrode layouts, and anatomical model variants. Main results. Regularization parameters between 1×10⁻⁴ and 1×10⁻¹ yielded the lowest errors, depending on SNR. Under these conditions, predicted source locations were typically within 10 mm of the true source. Higher SNR levels favored larger regularization values. Localization accuracy improved with increasing electrode density, though performance gains plateaued beyond approximately 50% coverage of the neck circumference. Significance. These results demonstrate that established source localization methods can be adapted for spinal cord applications in simulation. The findings highlight the importance of both regularization and sensor configuration, providing a foundation for future improvements in inverse modeling and experimental validation with real SCP recordings.
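sLORETA belongs to the family of regularized minimum-norm inverse solutions, whose core step is a Tikhonov-regularized projection through the lead field. A toy sketch with a random lead field (a real SCP application needs the anatomical volume-conductor model described above; dimensions and the regularization value are illustrative):

```python
import numpy as np

def minimum_norm_inverse(L, v, lam):
    """Regularized minimum-norm estimate s = L^T (L L^T + lam*I)^-1 v.
    L: (n_electrodes, n_sources) lead field; v: measured potentials;
    lam: the Tikhonov regularization parameter discussed in the study."""
    G = L @ L.T
    return L.T @ np.linalg.solve(G + lam * np.eye(G.shape[0]), v)

rng = np.random.default_rng(3)
L = rng.standard_normal((32, 100))     # toy lead field: 32 electrodes, 100 sources
s_true = np.zeros(100)
s_true[42] = 1.0                       # one active dipole in the "gray matter"
v = L @ s_true + 0.01 * rng.standard_normal(32)

s_hat = minimum_norm_inverse(L, v, lam=1e-2)
ranked = np.argsort(np.abs(s_hat))     # strongest estimated sources last
```

Sweeping `lam` over the range reported in the study and scoring the distance between the true and estimated peak reproduces, in miniature, the regularization/SNR trade-off the paper evaluates.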
No abstract available
The number of people in the world who suffer from stroke, multiple sclerosis, or spinal cord injury is estimated to reach 2 to 4 percent of the population, often with severe motor disabilities. Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) are an effective way of communicating with a person by translating brain signals into meaningful control commands. Motor imagery (MI)-based BCIs allow users to control external devices by imagining certain movements. Although conventional methods, such as common spatial pattern (CSP) and power spectral density (PSD), have shown good performance, they may not be able to capture complementary time-frequency and nonlinear characteristics of EEG signals. This study introduces a robust multi-domain framework for four-class MI classification using right-hand, left-hand, foot, and tongue imagery. The proposed system uses a combination of CSP, PSD, log variance, wavelet transform, and bispectrum analysis to extract spatial, spectral, temporal, and nonlinear features. The one-vs.-one support vector machine (OVO-SVM) classifier is used to improve class separability and to address class imbalance. In addition to offline classification, an end-to-end EEG-driven control architecture is developed, in which decoded motor imagery decisions are directly mapped into coordinated control commands for two independent motors, allowing for continuous translation of cognitive intentions into physical motion. Experimental results show good performance for all subjects: Subject 3 shows the highest accuracy (95.83%) and Kappa (0.94), while the average accuracy and Kappa are 74.3% and 0.65, respectively.
[Graphical abstract: system-level validation with an independent evaluation dataset, showing stable temporal prediction behaviour, reliable motor command generation, and coordination of multi-motor control and spatial motion trajectories, bridging the gap between EEG-based classification and practical motor control implementation.] These findings reveal the great potential of the proposed framework for real-world MI-based BCI applications.
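Of the feature families combined in this framework, the common spatial pattern step is the most compact to sketch. A two-class NumPy version on synthetic covariances (CSP with log-variance features; the four-class case would apply it within the one-vs.-one scheme):

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_filters=2):
    """Common spatial patterns from two class-averaged covariance matrices.
    Returns spatial filters projecting onto directions of maximal variance
    contrast between the classes."""
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening transform
    d, B = np.linalg.eigh(P @ cov_a @ P.T)
    W = B.T @ P                                    # rows are spatial filters
    idx = np.argsort(d)                            # take both spectrum ends
    pick = np.concatenate([idx[:n_filters // 2],
                           idx[-(n_filters - n_filters // 2):]])
    return W[pick]

rng = np.random.default_rng(4)
# Toy 4-channel trials: class A is strong on channel 0, class B on channel 3.
trials_a = rng.standard_normal((4, 1000)) * np.array([3.0, 1, 1, 1])[:, None]
trials_b = rng.standard_normal((4, 1000)) * np.array([1.0, 1, 1, 3.0])[:, None]
cov_a = trials_a @ trials_a.T / trials_a.shape[1]
cov_b = trials_b @ trials_b.T / trials_b.shape[1]

W = csp_filters(cov_a, cov_b)
# Log-variance of the filtered signals is the classic CSP feature.
feat_a = np.log(np.var(W @ trials_a, axis=1))
feat_b = np.log(np.var(W @ trials_b, axis=1))
```

One filter maximizes variance for class A and minimizes it for class B, the other does the reverse, so the log-variance features separate the two classes by construction.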
Brain-computer interface (BCI) technology has significant applications in neurorehabilitation and motor function restoration, especially for patients with stroke or spinal cord injury. Motor imagery electroencephalography (MI-EEG) is widely used in BCIs, but its nonlinear dynamics and inter-subject variability limit decoding accuracy. In this paper, a multiscale hybrid attention network (MSHANet) for MI-EEG decoding, which consists of spatiotemporal feature extraction (STFE), talking head self-attention (THSA), dynamic squeeze-and-excitation attention (DSEA), and a temporal convolutional network (TCN), is proposed. MSHANet was evaluated via within-subject experiments using BCI Competition IV Datasets 2a and 2b, as well as EEGMMID, achieving decoding accuracies of 83.56%, 89.75%, and 75.66%, respectively. In cross-subject experiments on the three datasets, the model attained accuracies of 69.93% on BCI-2a, 81.85% on BCI-2b, and 79.67% on EEGMMID. In addition, we propose an electrode spatial structure-aware encoder. This technique encodes the spatial positions of electrodes in the original data, enabling the model to obtain richer spatial electrode information at the input stage. In within-subject and cross-subject tasks on BCI-2a, this encoding improved the decoding performance by 2.83% and 2.91%, respectively. Visualization was also employed to elucidate feature distributions and the effectiveness of the attention mechanisms. Experimental results demonstrate that MSHANet performs exceptionally well in MI-EEG decoding tasks and has high potential for clinical applications, particularly in neurorehabilitation and motor function reconstruction.
No abstract available
Recently, social demand for a good quality of life has increased among elderly and disabled people, leading biomedical engineers and robotics researchers to fuse their techniques into novel rehabilitation systems. These models utilize biomedical signals acquired from particular organs, cells, or tissues of the human body. The human motion intention prediction mechanism plays an essential role in applications such as assistive and rehabilitation robots that execute specific tasks for elderly and physically impaired individuals. However, human-machine interaction techniques introduce additional complications, creating more scope for personalized assistance in human motion intention prediction. Therefore, in this paper, an Adaptive Hybrid Network (AHN) is implemented for effective human motion intention prediction. Initially, multimodal data such as electroencephalogram (EEG)/electromyography (EMG) signals and sensor measurements are collected from the available data resource. The gathered EEG/EMG signals are then converted into spectrogram images and passed to AH-CNN-LSTM, the integration of an Adaptive Hybrid Convolutional Neural Network (AH-CNN) with a Long Short-Term Memory (LSTM) network. Similarly, the sensor measurements are directly subjected to AH-CNN-Res-LSTM, the combination of an Adaptive Hybrid CNN with a Residual Network and LSTM (Res-LSTM), to obtain the predictive result. Further, to enhance the prediction, the parameters of both the AH-CNN-LSTM and AH-CNN-Res-LSTM networks are optimized using the Improved Yellow Saddle Goatfish Algorithm (IYSGA). The efficiency of the implemented model is computed by comparing the proposed technique with other standard models; the developed method outperformed the traditional methods.
No abstract available
No abstract available
This study aims to classify rest and upper limb movements execution and intention using electroencephalogram (EEG) signals by developing machine-learning (ML) algorithms. Five different MLs are implemented, including k-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF). The EEG data from fifteen healthy subjects during motor execution (ME) and motor imagination (MI) are preprocessed with Independent Component Analysis (ICA) to reduce eye-blinking associated artifacts. A sliding window technique varying from 1 s to 2 s is used to segment the signals. The majority voting (MV) strategy is employed during the post-processing stage. The results show that the application of ICA increases the accuracy of MI up to 6%, which is improved further by 1-2% using the MV (p<0.05). However, the improvement in the accuracies is more significant in MI (>5%) than in ME (<1%), indicating a more significant influence of eye-blinking artifacts in the EEG signals during MI than ME. Among the MLs, both RF and SVM consistently produced better accuracies in both ME and MI. Using RF, the 2 s window size produced the highest accuracies in both ME and MI than the smaller window sizes.
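The sliding-window segmentation and majority-voting post-processing described above are simple to state precisely. A minimal sketch (window and stride values below match the abstract's 1–2 s range; the per-window labels are made up for illustration):

```python
def sliding_windows(n_samples, win, step):
    """Start indices of sliding windows of length `win` with stride `step`."""
    return list(range(0, n_samples - win + 1, step))

def majority_vote(labels):
    """Post-processing majority vote over per-window classifier outputs."""
    return max(set(labels), key=labels.count)

# 2 s windows with a 0.5 s stride at 250 Hz over a 5 s recording.
fs = 250
starts = sliding_windows(5 * fs, win=2 * fs, step=fs // 2)

# Hypothetical per-window classifier decisions for one trial.
per_window = ["rest", "move", "move", "rest", "move", "move", "move"]
decision = majority_vote(per_window)
```

Each window is classified independently and the trial-level decision is the mode of the window labels, which is how the MV step buys its extra 1–2% accuracy.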
No abstract available
Brain–computer interfaces (BCIs) facilitate communication between the brain and external devices, providing an alternative solution for individuals with upper limb disabilities. The decoding of brain movement commands in BCIs relies on signal feature extraction and classification. Herein, the BNCI Horizon 2020 dataset is employed, which consists of electroencephalographic signals from ten participants with subacute and chronic cervical spinal cord injuries. These participants perform or attempt five distinct types of arm and hand movements. To extract signal features, a novel technique is introduced that estimates movement‐related cortical potentials and incorporates them into the processing pipeline. Moreover, a time‐frequency domain representation of the dataset is used as input for the classifier. Given the promising outcomes demonstrated by deep learning models in BCI classification, a pretrained ConvNet AlexNet is adopted to decode the motor tasks. The proposed method exhibits a remarkable average accuracy of 76.0% across all five categories, representing a significant advancement over existing state‐of‐the‐art techniques. Additionally, an in‐depth analysis of the convolutional layers in the model is conducted to gain comprehensive insights into the classification process. By examining the ConvNet filters and activations, the method contributes to a deeper understanding of the electrophysiology that underlies attempted movement.
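Movement-related cortical potentials are conventionally estimated by averaging low-frequency EEG epochs time-locked to cue or movement onset; the paper's estimation technique builds on this idea before feeding a time-frequency representation to the ConvNet. A minimal synthetic sketch of the averaging step (epoch lengths, sampling rate, and the injected deflection are assumptions):

```python
import numpy as np

def estimate_mrcp(signal, onsets, fs, pre=1.0, post=0.5):
    """Estimate a movement-related cortical potential by averaging epochs
    of `signal` time-locked to the sample indices in `onsets`."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [signal[o - n_pre:o + n_post] for o in onsets
              if o - n_pre >= 0 and o + n_post <= len(signal)]
    return np.mean(epochs, axis=0)

fs = 100
rng = np.random.default_rng(5)
n = 60 * fs
sig = rng.standard_normal(n)                       # noisy single-channel "EEG"
onsets = list(range(5 * fs, n - 5 * fs, 4 * fs))   # one attempt every 4 s

# Inject a stereotyped slow negative deflection ending at each "onset",
# mimicking the readiness-potential shape of an MRCP.
template = -np.linspace(0.0, 1.0, fs)
for o in onsets:
    sig[o - fs:o] += template

mrcp = estimate_mrcp(sig, onsets, fs)              # 1 s pre + 0.5 s post
```

Averaging across attempts cancels the background EEG while the time-locked negativity survives, which is why the pre-onset end of `mrcp` dips well below baseline.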
Tetraplegia from spinal cord injury leaves many patients paralyzed below the neck and unable to perform most activities of daily living. Brain-machine interfaces (BMIs) could give tetraplegic patients more independence by directly utilizing brain signals to control external devices such as robotic arms or hands. The cortical grasp network has been of particular interest because of its potential to facilitate the restoration of dexterous object manipulation. However, a network that involves such high-level cortical areas may also provide additional information, such as the encoding of speech. Towards understanding the role of different brain areas in the human cortical grasp network, neural activity related to motor intentions for grasping and performing speech was recorded in a tetraplegic patient in the supramarginal gyrus (SMG), the ventral premotor cortex (PMv), and the somatosensory cortex (S1). We found that in the high-level brain areas SMG and PMv, grasps were well represented by firing rates of neuronal populations already at visual cue presentation. During motor imagery, grasps could be significantly decoded from all brain areas. At identical neuronal population sizes, SMG and PMv achieved similar highly significant decoding abilities, demonstrating their potential for grasp BMIs. During speech, SMG encoded both spoken grasps and colors, in contrast to PMv and S1, which were not able to significantly decode speech. These findings suggest that grasp signals can robustly be decoded at a single-unit level from the cortical grasping circuit in humans. Data from PMv suggest a specialized role in grasping, while SMG's role is broader and extends to speech. Together, these results indicate that brain signals from high-level areas of the human cortex can be exploited for a variety of different BMI applications.
Modern robotic hands/upper limbs may replace multiple degrees of freedom of extremity function. However, their intuitive use requires a high number of control signals, which current man-machine interfaces do not provide. Here, we discuss a broadband control interface that combines targeted muscle reinnervation, implantable multichannel electromyographic sensors, and advanced decoding to address the increasing capabilities of modern robotic limbs. With targeted muscle reinnervation, nerves that have lost their targets due to an amputation are surgically transferred to residual stump muscles to increase the number of intuitive prosthetic control signals. This surgery re-establishes a nerve-muscle connection that is used for sensing nerve activity with myoelectric interfaces. Moreover, the nerve transfer determines neurophysiological effects, such as muscular hyper-reinnervation and cortical reafferentation that can be exploited by the myoelectric interface. Modern implantable multichannel EMG sensors provide signals from which it is possible to disentangle the behavior of single motor neurons. Recent studies have shown that the neural drive to muscles can be decoded from these signals and thereby the user's intention can be reliably estimated. By combining these concepts in chronic implants and embedded electronics, we believe that it is in principle possible to establish a broadband man-machine interface, with specific applications in prosthesis control. This perspective illustrates this concept, based on combining advanced surgical techniques with recording hardware and processing algorithms. Here we describe the scientific evidence for this concept, current state of investigations, challenges, and alternative approaches to improve current prosthetic interfaces.
One of the current challenges in human motor rehabilitation is the robust application of Brain-Machine Interfaces to assistive technologies such as powered lower limb exoskeletons. Reliable decoding of motor intentions and accurate timing of the robotic device's actuation are fundamental to optimally enhance the patient's functional improvement. Several studies show that it may be possible to extract motor intentions from electroencephalographic (EEG) signals. These findings, although notable, suggest that current techniques are still far from being systematically applied to accurate real-time control of rehabilitation or assistive devices. Here we propose the estimation of spinal primitives of multi-muscle control from EEG, using electromyography (EMG) dimensionality reduction as a solution to increase the robustness of the method. We successfully apply this methodology, both to healthy individuals and to incomplete spinal cord injury (SCI) patients, to identify muscle contraction during periodic knee extension from the EEG. We then introduce a novel performance metric, which accurately evaluates muscle primitive activations.
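The EMG dimensionality reduction used to obtain spinal primitives of multi-muscle control is commonly performed with non-negative matrix factorization. As a hedged editorial sketch (not the authors' code; `extract_synergies` and its defaults are hypothetical), Lee-Seung multiplicative updates factorize a non-negative EMG envelope matrix into synergy weights and primitive activations:

```python
import numpy as np

def extract_synergies(E, n_synergies, n_iter=300, seed=0):
    """Factorize a non-negative EMG envelope matrix E (muscles x time) into
    synergy weights W (muscles x synergies) and activations H (synergies x time)
    so that E is approximately W @ H (Lee-Seung multiplicative updates)."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = E.shape
    W = rng.random((n_muscles, n_synergies)) + 1e-6
    H = rng.random((n_synergies, n_samples)) + 1e-6
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ E) / (W.T @ W @ H + eps)   # update primitive activations
        W *= (E @ H.T) / (W @ H @ H.T + eps)   # update synergy weights
    return W, H
```

In such a scheme, the rows of `H` would correspond to the muscle primitives whose activations one then tries to estimate from EEG.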
No abstract available
No abstract available
No abstract available
No abstract available
Objective. Recovery of voluntary gait after spinal cord injury (SCI) requires the restoration of effective motor cortical commands, either by means of a mechanical connection to the limbs, or by restored functional connections to muscles. The latter approach might use functional electrical stimulation (FES), driven by cortical activity, to restore voluntary movements. Moreover, there is evidence that this peripheral stimulation, synchronized with patients' voluntary effort, can strengthen descending projections and recovery. As a step towards establishing such a cortically-controlled FES system for restoring function after SCI, we evaluate here the type and quantity of neural information needed to drive such a brain machine interface (BMI) in rats. We compared the accuracy of the predictions of hindlimb electromyograms (EMG) and kinematics using neural data from an intracortical array and a less-invasive epidural array. Approach. Seven rats were trained to walk on a treadmill with a stable pattern. One group of rats (n = 4) was implanted with intracortical arrays spanning the hindlimb sensorimotor cortex and EMG electrodes in the contralateral hindlimb. Another group (n = 3) was implanted with epidural arrays placed on the dura overlying the hindlimb sensorimotor cortex. EMG, kinematics and neural data were simultaneously recorded during locomotion. EMGs and kinematics were decoded using linear and nonlinear methods from multiunit activity and field potentials. Main results. Predictions of both kinematics and EMGs were effective when using either multiunit spiking or local field potentials (LFPs) recorded from intracortical arrays. Surprisingly, the signals from epidural arrays were essentially uninformative. Results from somatosensory evoked potentials (SSEPs) confirmed that these arrays recorded neural activity, corroborating our finding that this type of array is unlikely to provide useful information to guide an FES-BMI for rat walking. Significance. We believe that the accuracy of our decoders in predicting EMGs from multiunit spiking activity is sufficient to drive an FES-BMI. Our future goal is to use this rat model to evaluate the potential for cortically-controlled FES to be used to restore locomotion after SCI, as well as its further potential as a rehabilitative technology for improving general motor function.
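Linear decoding of EMGs from binned multiunit activity, as evaluated above, is often implemented as a lagged (Wiener-style) linear filter fit by regularized least squares. A minimal sketch under that assumption (the paper's exact decoders are not specified in the abstract; all names here are illustrative):

```python
import numpy as np

def build_lagged(X, n_lags):
    """X: (time, channels) binned firing rates -> (time, channels * n_lags)
    design matrix of current and past samples (zero-padded at the start)."""
    T, C = X.shape
    cols = [np.vstack([np.zeros((k, C)), X[:T - k]]) for k in range(n_lags)]
    return np.hstack(cols)

def fit_wiener(X, y, n_lags=5, ridge=1e-3):
    """Fit lagged linear filter weights mapping neural rates X to EMG y."""
    Z = np.hstack([build_lagged(X, n_lags), np.ones((len(X), 1))])  # add bias
    return np.linalg.solve(Z.T @ Z + ridge * np.eye(Z.shape[1]), Z.T @ y)

def predict_wiener(X, w, n_lags=5):
    Z = np.hstack([build_lagged(X, n_lags), np.ones((len(X), 1))])
    return Z @ w
```

The same structure applies to kinematic targets; nonlinear decoders would replace the least-squares fit with, e.g., a neural network.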
No abstract available
Spinal cord injury (SCI) impairs the flow of sensory and motor signals between the brain and the areas of the body located below the lesion level. Here, we describe a neurorehabilitation setup combining several approaches that were shown to have a positive effect in patients with SCI: gait training by means of non-invasive, surface functional electrical stimulation (sFES) of the lower-limbs, proprioceptive and tactile feedback, balance control through overground walking and cue-based decoding of cortical motor commands using a brain-machine interface (BMI). The central component of this new approach was the development of a novel muscle stimulation paradigm for step generation using 16 sFES channels taking all sub-phases of physiological gait into account. We also developed a new BMI protocol to identify left and right leg motor imagery that was used to trigger an sFES-generated step movement. Our system was tested and validated with two patients with chronic paraplegia. These patients were able to walk safely with 65–70% body weight support, accumulating a total of 4,580 steps with this setup. We observed cardiovascular improvements and less dependency on walking assistance, but also partial neurological recovery in both patients, with substantial rates of motor improvement for one of them.
No abstract available
Classification of electroencephalogram (EEG) and electrocorticogram (ECoG) signals obtained during motor imagery (MI) has substantial application potential, including for communication assistance and rehabilitation support for patients with motor impairments. These signals remain inherently susceptible to physiological artifacts (e.g., eye blinking, swallowing), which pose persistent challenges. Although Transformer-based approaches for classifying EEG and ECoG signals have been widely adopted, they often struggle to capture fine-grained dependencies within them. To overcome these limitations, we propose Cortical-SSM, a novel architecture that extends deep state space models to capture integrated dependencies of EEG and ECoG signals across temporal, spatial, and frequency domains. We validated our method across three benchmarks: 1) two large-scale public MI EEG datasets containing more than 50 subjects, and 2) a clinical MI ECoG dataset recorded from a patient with amyotrophic lateral sclerosis. Our method outperformed baseline methods on the three benchmarks. Furthermore, visual explanations derived from our model indicate that it effectively captures neurophysiologically relevant regions of both EEG and ECoG signals.
Patients with spinal cord injury (SCI) often face urinary and defecation dysfunction, and existing treatments have limited effectiveness. Brain-computer interface (BCI) technology has been shown to have positive effects on the rehabilitation of SCI patients, but its application in promoting the recovery of urinary and defecation functions has not been explored. This study proposes a new BCI application approach and develops an accurate decoding model targeted at urination and defecation motor attempt tasks. Specifically, we designed a Bidirectional Temporal Convolutional Network (UDCNN-BiTCN) to decode both the suppressed urination and defecation (S-UD) task and the urination and defecation (UD) task. Seventy-one participants (including 44 healthy controls and 27 SCI patients) were recruited for the experiment. The results showed that UDCNN-BiTCN achieved an average accuracy of 91.47% on the S-UD task and 91.81% on the UD task. The study also conducted within-subject cross-task transfer learning and cross-subject experiments, further validating the superiority of the model. In addition, we conducted a comprehensive analysis of this new paradigm from the perspective of classification performance. The research approach and findings in this study provide a valuable new perspective for BCI applications in the recovery of urinary and defecation functions.
In this study, we present a high-density, high-channel-count micro-electrocorticographic (μECoG) electrode array for real-time motor decoding. The 256-channel μECoG electrode arrays, based on MEMS processes, possess excellent flexibility and mechanical robustness, allowing them to conform to the cortical surface and enabling the acquisition of high-quality ECoG signals. The advanced brain-computer interface (BCI) system was applied to a Labrador dog, and highly accurate real-time motor decoding was achieved, showing the advantages of high-resolution ECoG sampling. Our method demonstrates the potential for controlling a cursor with ECoG signals, offering the possibility of reconstructing motor functions and synthesizing avatars.
Brain signal decoding combined with spinal cord stimulation has been used in early clinical trials to restore mobility to paraplegic and tetraplegic patients. Making such systems available for home use requires them to be portable, energy-efficient, and capable of real-time operation. In addition, the decoding model needs to be adaptive to account for the evolution of the brain-machine connection. We present a case study of a Brain-Computer Interface (BCI) that was ported to an embedded platform, resulting in an over 10× reduction in power consumption compared to the previous implementation, while maintaining the real-time constraint. The adaptive incremental model update, which preserves a high decoding accuracy over time, was optimized to meet real-time constraints, enabling updates in less than 15 seconds. We explain several algorithmic and implementation techniques that were deployed to reduce the execution time of our C++ embedded implementation by 2×. Furthermore, two different embedded systems were studied, and we identified one that was better suited for inference-only use, while the other was better when the model needed to be updated. This case study brings us a step closer to deploying adaptive BCI systems for home use.
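The abstract does not specify the adaptive incremental update; one standard choice for updating a linear decoder sample-by-sample within tight real-time budgets is recursive least squares (RLS) with a forgetting factor, which avoids refitting from scratch. A hedged sketch with a hypothetical class name and parameters:

```python
import numpy as np

class RLSDecoder:
    """Recursive least squares with forgetting factor lam: updates linear
    decoder weights one sample at a time without storing past data."""
    def __init__(self, n_features, lam=0.99, delta=100.0):
        self.w = np.zeros(n_features)
        self.P = delta * np.eye(n_features)   # inverse-correlation estimate
        self.lam = lam

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        e = y - self.w @ x                    # a priori prediction error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

    def predict(self, x):
        return self.w @ x
```

The O(n²) per-sample cost and fixed memory footprint are what make this family of updates attractive on embedded hardware.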
In previous studies, Sensory Motor Rhythm (SMR) and Movement-Related Cortical Potential (MRCP) features have been shown to be complementary in decoding a variety of motion information. However, no studies have reported whether they are complementary when subjects perform functional lower limb movements. In this work, we investigate the effect of the two features, or their combination, on classifying three functional lower limb movements (standing, walking, sitting) and rest. MRCP features are extracted by Locality Preserving Projection (LPP), and SMR features are extracted by selecting the best frequency-channel pairs through the Bhattacharyya distance. A Support Vector Machine (SVM) classifier was employed to assess the performance of different features or their combination in six binary classification tasks, where the three types of lower limb movements are compared with each other or with rest. The combination of the two features achieved the highest accuracy in most classification tasks. In the classification of standing and walking, the combination of these two features showed significantly better performance (both p < 0.05) than classifiers using either MRCP or SMR alone. Our results suggest that MRCP and SMR features are complementary for decoding functional lower limb movements, which would benefit Brain-Computer Interface (BCI) systems for lower limb rehabilitation.
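Selecting the best frequency-channel pairs through the Bhattacharyya distance can be sketched as follows, assuming one band-power feature per channel-band pair and Gaussian class distributions (an editorial illustration, not the authors' implementation):

```python
import numpy as np

def bhattacharyya_1d(a, b):
    """Bhattacharyya distance between two 1-D feature samples,
    assuming each class is Gaussian."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(), b.var()
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0)) + \
           0.25 * (m1 - m2) ** 2 / (v1 + v2)

def rank_channel_bands(features_a, features_b):
    """features_*: (trials, n_pairs) band-power features per class.
    Returns channel-band indices sorted by decreasing separability,
    plus the distances themselves."""
    d = np.array([bhattacharyya_1d(features_a[:, j], features_b[:, j])
                  for j in range(features_a.shape[1])])
    return np.argsort(d)[::-1], d
```

The top-ranked pairs would then feed the SMR feature set passed to the SVM.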
Intra-cortical brain-machine interfaces (iBMIs) present a promising solution to restoring and decoding brain activity lost due to injury. However, patients with such neuroprosthetics suffer from permanent skull openings resulting from the devices' bulky wiring. This drives the development of wireless iBMIs, which demand low power consumption and a small device footprint. Most recently, spiking neural networks (SNNs) have been researched as potential candidates for low-power neural decoding. In this work, we present the next step in utilizing SNNs for such tasks, building on the recently published results of the 2024 Grand Challenge on Neural Decoding for Motor Control of Non-Human Primates. We optimize our model architecture to exceed the existing state of the art on the Primate Reaching dataset while maintaining similar resource demand through various compression techniques. We further focus on implementing a real-time-capable version of the model and discuss the implications of this architecture. With this, we advance one step towards latency-free decoding of cortical spike trains using neuromorphic technology, ultimately improving the lives of millions of paralyzed patients.
A promising technology for facilitating communication and control for people with disabilities is the brain-computer interface (BCI). Electroencephalogram (EEG) signals are frequently used in BCI systems; however, accurate classification of these signals remains challenging. This study introduces a new technique for classifying EEG signals using spectral characteristics. Using pre-movement EEG recordings, the short-time Fourier transform (STFT) and spectral feature extraction are employed to produce an accurate classification method for upper limb dynamic movements. The proposed method was tested on a larger dataset of healthy people, and other classification algorithms, such as Convolutional Neural Networks (CNN) and Residual Networks (ResNet), were used to assess its performance. The results show that the proposed method consistently produces high accuracy rates for all subjects and movements, with an overall accuracy of 88.7%; the highest accuracy of 100% was achieved on subject 5 during movement 3 using ResNet on a private dataset compiled from 12 healthy subjects, consisting of 5 types of complex upper-limb pre-movements performed in 50 trials. Our study extends previous work by using a different feature extraction method and classification algorithms on a larger dataset of healthy subjects, outperforming previous methods. Utilizing spectral features, our method could improve the accuracy of BCI systems in various applications, including medical diagnosis, control of assistive devices, and gaming software. Furthermore, this approach could also be extended to other types of signals beyond EEG, enabling accurate classification in a broader range of applications.
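The STFT-based spectral feature extraction described above can be sketched in a few lines; the Hann window, hop size, and log-power choice below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def stft_features(x, fs, win=64, hop=32):
    """Log-power STFT of a single-channel EEG signal x using a Hann window.
    Returns (frames, freq_bins) features and the frequency axis in Hz."""
    w = np.hanning(win)
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] * w for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # per-frame spectrum
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    return np.log(power + 1e-12), freqs
```

In practice the per-channel spectrogram frames (or statistics over them) would be flattened into the feature vector fed to the CNN or ResNet classifier.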
Brain-computer interface (BCI) systems can be utilized for kinematics decoding from scalp brain activation to control rehabilitation or power-augmenting devices. In this study, hand kinematics decoding for a grasp-and-lift task is performed in three-dimensional (3D) space using scalp electroencephalogram (EEG) signals. Twelve subjects from the publicly available WAY-EEG-GAL database were utilized in this study. In particular, multi-layer perceptron (MLP) and convolutional neural network-long short-term memory (CNN-LSTM) based deep learning frameworks are proposed that utilize the motor-neural information encoded in the pre-movement EEG data. Spectral features are analyzed for hand kinematics decoding using EEG data filtered in seven frequency ranges. The best-performing frequency band's spectral features were considered for further analysis with different EEG window sizes and lag windows. Appropriate lag windows relative to movement onset make the approach pre-movement in the true sense. Additionally, inter-subject hand trajectory decoding analysis is performed using a leave-one-subject-out (LOSO) approach. The Pearson correlation coefficient and the hand trajectory are considered as performance metrics to evaluate the decoding performance of the neural decoders. This study explores the feasibility of inter-subject 3-D hand trajectory decoding using EEG signals only during a reach-and-grasp task, probably for the first time. The results may provide viable information for decoding 3D hand kinematics using pre-movement EEG signals for practical BCI applications such as exoskeletons/exosuits and prostheses.
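The Pearson correlation metric used above for trajectory decoding is computed per axis between the actual and decoded trajectories; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def trajectory_correlation(y_true, y_pred):
    """Per-axis Pearson correlation between actual and decoded 3-D hand
    trajectories. y_true, y_pred: (samples, 3) arrays; returns [r_x, r_y, r_z]."""
    out = []
    for k in range(y_true.shape[1]):
        a = y_true[:, k] - y_true[:, k].mean()
        b = y_pred[:, k] - y_pred[:, k].mean()
        out.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return out
```

Averaging these per-axis values across LOSO folds gives the kind of inter-subject decoding score the study reports.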
Electroencephalogram (EEG) signals, inherently non-stationary and non-linear, present significant challenges in their processing and interpretation. This paper presents a hybrid mode selection approach using two advanced decomposition methods, Empirical Mode Decomposition (EMD) and Variational Mode Decomposition (VMD), to analyze these signals, targeting their application in the classification of upper limb complex movements for enhanced prosthetic limb control and rehabilitation therapy assessment. Using optimized statistical features extracted from selected modes, intrinsic mode functions (IMFs) via EMD and modes via VMD, we seek to better distinguish neural activities in pre-movement EEG signals. Our methodology involves the following two strategies: straightforward extraction of statistical features from the modes yielded by EMD and VMD; and a genetic algorithm (GA) feature selection technique to select the most optimal set from these statistical features. The derived features train machine learning (ML) classifiers to differentiate limb movements. The results, derived from a proprietary dataset from Aalborg University, Denmark, comprising five distinct upper limb movements, demonstrate the effectiveness of our hybrid approach. The use of EMD and VMD significantly enhanced the discriminatory power of the extracted features, leading to improved classification performance. Furthermore, our hybrid approach yielded classification accuracies of 93.1% and 95.6% with EMD and VMD, respectively, when the K-NN classifier was deployed with 10-fold cross-validation. The K-NN classifier outperformed traditional ML classifiers in terms of computational time, highlighting its potential as a lightweight yet robust algorithm for the classification of complex movements. The primary goal is to present and validate a hybrid mode (IMFs/modes) selection approach through EMD and VMD to analyze EEG signals associated with upper limb complex movements.
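Statistical features extracted from decomposition modes (IMFs from EMD, or modes from VMD) typically include amplitude and shape statistics. Below is a hedged sketch of one plausible feature set (mean absolute value, variance, skewness, excess kurtosis); the exact features the authors optimized are not listed in the abstract:

```python
import numpy as np

def mode_features(modes):
    """Per-mode statistical features for a list of 1-D decomposition modes:
    mean absolute value, variance, skewness, excess kurtosis."""
    feats = []
    for m in modes:
        mu, sd = m.mean(), m.std()
        z = (m - mu) / (sd + 1e-12)
        feats.extend([np.abs(m).mean(),      # amplitude
                      m.var(),               # energy spread
                      (z ** 3).mean(),       # skewness
                      (z ** 4).mean() - 3.0  # excess kurtosis
                      ])
    return np.array(feats)
```

A GA-based selector would then search over subsets of this concatenated feature vector before the K-NN classifier is trained.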
BACKGROUND AND OBJECTIVE Decoding functional movements from electroencephalographic (EEG) activity for motor disability rehabilitation is essential to develop home-use brain-computer interface systems. In this paper, the classification of five complex functional upper limb movements is studied by using only the pre-movement planning and preparation recordings of EEG data. METHODS Nine healthy volunteers performed five different upper limb movements. Different frequency bands of the EEG signal are extracted by the stationary wavelet transform. Common spatial patterns are used as spatial filters to enhance separation of the five movements in each frequency band. In order to increase the efficiency of the system, a mutual information-based feature selection algorithm is applied. The selected features are classified using the k-nearest neighbor, support vector machine, and linear discriminant analysis methods. RESULTS The k-nearest neighbor method outperformed the other classifiers and resulted in an average classification accuracy of 94.0 ± 2.7% for five classes of movements across subjects. Further analysis of each frequency band's contribution to the optimal feature set showed that the gamma and beta frequency bands contributed most to the classification. To reduce the complexity of the EEG recording system setup, we selected a subset of the 10 most effective EEG channels from the 64 channels, with which we could reach an accuracy of 70%. These EEG channels were mostly distributed over the prefrontal and frontal areas. CONCLUSIONS Overall, the results indicate that it is possible to classify complex movements before movement onset by using spatially selected EEG data.
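The common spatial patterns step used above as a per-band spatial filter can be sketched via the standard whitening-plus-eigendecomposition construction (an editorial illustration with hypothetical function names, not the authors' code; the two-class case is shown):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters from two classes of band-passed EEG trials, each a
    list of (n_channels, n_samples) arrays. Maximizes variance for one class
    while minimizing it for the other."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # trace-normalized
        return np.mean(covs, axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # whiten the composite covariance, then eigendecompose the whitened Ca
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    d, V = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(d)
    sel = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # extreme filters
    return V[:, sel].T @ P   # rows are spatial filters

def csp_logvar(trial, W):
    """Standard log-variance CSP features for one trial."""
    y = W @ trial
    v = y.var(axis=1)
    return np.log(v / v.sum())
```

For the five-class problem in the paper, such two-class filters are typically computed in a one-versus-rest or pairwise fashion per frequency band.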
Individuals with severe tetraplegia can benefit from brain-computer interfaces (BCIs). While most movement-related BCI systems focus on right/left hand and/or foot movements, very few studies have considered tongue movements to construct a multiclass BCI. The aim of this study was to decode four movement directions of the tongue (left, right, up, and down) from single-trial pre-movement EEG and provide a feature and classifier investigation. In offline analyses (from ten individuals without a disability), detection and classification were performed using temporal, spectral, entropy, and template features classified using either linear discriminant analysis, support vector machine, random forest, or multilayer perceptron classifiers. Besides the 4-class classification scenario, all possible 3- and 2-class scenarios were tested to find the most discriminable movement types. Linear discriminant analysis achieved, on average, higher classification accuracies for both movement detection and classification. The right and down tongue movements provided the highest and lowest detection accuracies (95.3±4.3% and 91.7±4.8%), respectively. The 4-class classification achieved an accuracy of 62.6±7.2%, while the best 3-class classification (using left, right, and up movements) and 2-class classification (using left and right movements) achieved accuracies of 75.6±8.4% and 87.7±8.0%, respectively. Using only a combination of the temporal and template feature groups provided further classification accuracy improvements. Presumably, this is because these feature groups utilize the movement-related cortical potentials, which are noticeably different on the left versus right brain hemisphere for the different movements. This study shows that the cortical representation of the tongue is useful for extracting control signals for multi-class movement detection BCIs.
High-performance prosthetic and exoskeleton systems based on EEG signals can improve the quality of life of hand-impaired people. Effective control of these assistive devices requires accurate EEG signal classification. Although there have been advancements in assistive Brain-Computer Interface (BCI) systems, classifying EEG signals with high accuracy remains a great challenge. The objective of this research is to investigate the accuracy of EEG signal classification with a Spiking Neural Network (SNN) classifier for accurate and reliable control of prosthetic and exoskeleton systems for individuals with hand impairment. The EEG dataset was taken from the BNCI Horizon 2020 website and contains hand movement versus relax events from a patient with a high spinal cord injury (SCI) operating a neuroprosthetic device attached to the paralyzed right upper limb. A fusion of Dispersion Entropy (DE), Fuzzy Entropy (FE), and Fluctuation-based Dispersion Entropy (FDE) with mean and skewness features is extracted from the Motor Imagery (MI) EEG signals and applied to the SNN classifier. To compare the performance of this algorithm, the same features were used with Convolutional Neural Network (CNN), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Logistic Regression (LR) classifiers. The SNN gave the highest classification accuracy of 80%, with a precision of 80.95%, recall of 77.28%, and F1-score of 79.07%. This indicates that the SNN with these five features has greater potential in BCI system-based applications.
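Dispersion Entropy, one of the entropy features fused above, maps each sample to a small number of classes through the normal CDF and measures the Shannon entropy of short embedding patterns. The sketch below is a hedged reimplementation of the general definition, not the authors' exact feature code; the parameter defaults are assumptions:

```python
import math
import numpy as np

def dispersion_entropy(x, n_classes=5, m=2, delay=1):
    """Normalized dispersion entropy of a 1-D signal: map samples to
    1..n_classes via the normal CDF, count length-m embedding patterns,
    and return Shannon entropy normalized by log(n_classes**m)."""
    mu, sd = np.mean(x), np.std(x)
    y = np.array([0.5 * (1.0 + math.erf((v - mu) / (sd * math.sqrt(2) + 1e-12)))
                  for v in x])                         # normal-CDF mapping
    z = np.minimum(np.maximum(np.round(n_classes * y + 0.5), 1),
                   n_classes).astype(int)              # class labels 1..c
    patterns = {}
    n = len(z) - (m - 1) * delay
    for i in range(n):
        key = tuple(z[i + j * delay] for j in range(m))
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / np.log(n_classes ** m))
```

Irregular signals score near 1, while predictable or constant segments score near 0, which is what makes the feature discriminative for movement-versus-relax epochs.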
Brain-machine interfaces (BMIs), particularly those based on electroencephalography (EEG), offer promising solutions for assisting individuals with motor disabilities. However, challenges in reliably interpreting EEG signals for specific tasks, such as simulating keystrokes, persist due to the complexity and variability of brain activity. Current EEG-based BMIs face limitations in adaptability, usability, and robustness, especially in applications like virtual keyboards, as traditional machine-learning models struggle to handle high-dimensional EEG data effectively. To address these gaps, we developed an EEG-based BMI system capable of accurately identifying voluntary keystrokes, specifically leveraging right and left voluntary hand movements. Using a publicly available EEG dataset, the signals were pre-processed with band-pass filtering, segmented into 22-electrode arrays, and refined into event-related potential (ERP) windows, resulting in a 19x200 feature array categorized into three classes: resting state (0), 'd' key press (1), and 'l' key press (2). Our approach employs a hybrid neural network architecture with BiGRU-Attention as the proposed model for interpreting EEG signals, achieving superior test accuracy of 90% and a mean accuracy of 91% in 10-fold stratified cross-validation. This performance outperforms traditional ML methods like Support Vector Machines (SVMs) and Naive Bayes, as well as advanced architectures such as Transformers, CNN-Transformer hybrids, and EEGNet. Finally, the BiGRU-Attention model is integrated into a real-time graphical user interface (GUI) to simulate and predict keystrokes from brain activity. Our work demonstrates how deep learning can advance EEG-based BMI systems by addressing the challenges of signal interpretation and classification.
Brain-computer interface (BCI) technology enables communication between humans and devices by reflecting users' status and intentions. Electroencephalography (EEG) signals are utilized to capture brain electrical activity with no surgical operation. When conducting motor imagery (MI), one of the endogenous BCI paradigms, users imagine the movement of the muscles used when performing a certain movement, without actual physical movement. However, not all subjects show outstanding classification performance in decoding MI-based EEG signals. We propose a novel method that utilizes the weights of a pre-trained model to generate personalized weights, effectively combining general MI features with personalized features. We used 5-fold cross-validation to evaluate performance, and conducted the experiments with 3 different pre-trained models (Top-3, Top-5, and Top-7). We compared the performance of our proposed method against the baseline and against full fine-tuning. Compared with the baseline, our proposed method improved the average accuracies for all pre-trained models, by 0.123, 0.138, and 0.143, respectively. When comparing our proposed method with full fine-tuning, the average accuracies of our proposed method were the highest for all pre-trained models, with differences in average accuracy of 0.012, 0.019, and 0.009, respectively. Hence, we demonstrated the possibility of improving the precision and effectiveness of EEG-based systems by reflecting the individual differences in EEG signals among subjects with low classification accuracy.
This study presents a preliminary investigation into the feasibility of controlling a prosthetic arm using electroencephalography (EEG) signals. EEG data was collected and pre-processed to isolate relevant brain activity associated with hand movements. Basic feature extraction techniques were applied to quantify these patterns, followed by a simple classification algorithm to distinguish between hand open and closed states. While demonstrating the potential of EEG-based control, this research highlights the need for advanced signal processing, robust feature engineering, and sophisticated machine learning models to achieve accurate and reliable prosthetic control.
Bimanual coordination is important for developing a natural motor brain-computer interface (BCI) from electroencephalogram (EEG) signals, covering the aspects of bilateral arm training for rehabilitation, bimanual coordination for daily-life assistance, and improving the multidimensional control of BCIs. For the same task targets of both hands, simultaneous and sequential bimanual movements are two different bimanual coordination manners. Planning and performing motor sequences are fundamental human abilities, and in many complex tasks it is more natural to execute sequential movements than simultaneous movements. However, to date, for these two different manners in which the two hands coordinate to reach the same task targets, the differences in the neural correlates and the feasibility of movement discrimination have not been explored. In this study, we aimed to investigate these two issues based on a bimanual reaching task for the first time. Neural correlates viewed through movement-related cortical potentials, event-related oscillations, and source imaging showed unique neural encoding patterns for sequential movements. Moreover, for the same task targets of both hands, simultaneous and sequential bimanual movements were successfully discriminated in both the pre-movement and movement execution periods. This study revealed the neural encoding patterns of sequential bimanual movements and presented their value in developing a more natural, high-performance motor BCI.
Continuous decoding of hand kinematics has been recently explored for the intuitive control of electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs). Deep neural networks (DNNs) are emerging as powerful decoders, for their ability to automatically learn features from lightly pre-processed signals. However, DNNs for kinematics decoding lack in the interpretability of the learned features and are only used to realize within-subject decoders without testing other training approaches potentially beneficial for reducing calibration time, such as transfer learning. Here, we aim to overcome these limitations by using an interpretable convolutional neural network (ICNN) to decode 2-D hand kinematics (position and velocity) from EEG in a pursuit tracking task performed by 13 participants. The ICNN is trained using both within-subject and cross-subject strategies, and also testing the feasibility of transferring the knowledge learned on other subjects on a new one. Moreover, the network eases the interpretation of learned spectral and spatial EEG features. Our ICNN outperformed most of the other state-of-the-art decoders, showing the best trade-off between performance, size, and training time. Furthermore, transfer learning improved kinematics prediction in the low data regime. The network attributed the highest relevance for decoding to the delta-band across all subjects, and to higher frequencies (alpha, beta, low-gamma) for a cluster of them; contralateral central and parieto-occipital sites were the most relevant, reflecting the involvement of sensorimotor, visual and visuo-motor processing. The approach improved the quality of kinematics prediction from the EEG, at the same time allowing interpretation of the most relevant spectral and spatial features.
Recently, the advent of non-invasive brain-computer interfaces (BCIs) for continuous decoding of upper limb motions has opened a new horizon for motor-disabled people. However, the performance of discrete-decoding BCIs based on discriminating different brain states is still more robust. In this study, we aimed to cascade a discrete state decoder with a continuous decoder to enhance the prediction of hand trajectories. EEG data were recorded from nine healthy subjects performing a center-out task with four orthogonal targets on the horizontal plane. The pre-movement data of each trial were used to train a binary discrete decoder that identifies the axis of the movement based on common spatial pattern (CSP) features. Two non-parametric continuous decoders based on Gaussian process regression (GPR) were designed for continuous decoding of hand movements along each axis using the envelope features of EEG signals in six frequency bands. In addition to the four principal orthogonal targets, targets at random directions on the horizontal plane were recorded to evaluate the generalizability of the proposed model. The discrete decoder attained an average binary classification accuracy of 97.1% for discriminating movement along the x-axis versus the y-axis. The proposed state-based method achieved a mean correlation coefficient of 0.54 between actual and predicted trajectories for the principal targets over all subjects. The trajectories of random targets were also decoded, with a mean correlation of 0.37. The generalizability of the proposed paradigm, demonstrated by the findings of this study, could open new possibilities for developing novel types of neuroprostheses for rehabilitation purposes.
No abstract available
For prosthetic limb control and rehabilitation training of people with disabilities, it is important to use electroencephalography (EEG) to recognize different hand movements. In this paper, we propose a novel method combining multivariate empirical mode decomposition (MEMD) and common spatial patterns (CSP) to extract EEG features and achieve prediction of hand movement. Thirty-channel EEG signals and four-channel EMG signals were acquired during the experiment, and the EEG signals were captured one second prior to the beginning of the hand movement detected from the surface electromyography (EMG) signals. MEMD was applied to decompose the pre-processed EEG signals into several multivariate intrinsic mode functions (IMFs), and CSP was used to extract the features of the IMFs. Then, principal component analysis (PCA) was used to reduce the feature dimension. Finally, six one-versus-one support vector machines were applied to classify the EEG signals. Ten subjects participated in this experiment, which consisted of four types of hand movements. EEG signals were divided into a training set and a test set by five-fold cross-validation, and the average classification accuracy was regarded as the final result. The optimal single IMF and combinations of IMFs for classification were analyzed in this study. The results showed that the proposed method performed well in predicting upcoming hand movements by classifying the signals prior to the detected hand movement. The combination of IMF1, IMF2, and IMF3 gave the highest average classification accuracy of 82.67%, with an average kappa coefficient of 0.77, indicating that the predicted results were highly consistent with the actual results. This indicates that the proposed method combining MEMD and CSP is suitable for predicting different types of hand movements.
This work presents two brain-computer interfaces (BCIs) for shoulder pre-movement recognition using: 1) a manual strategy for electroencephalography (EEG) channel selection, and 2) subject-specific channel selection by applying non-negative matrix factorization (NMF). In addition, the proposed BCIs compute spatial features extracted from filtered EEG signals through Riemannian covariance matrices and use linear discriminant analysis (LDA) to discriminate between shoulder pre-movement and rest states. We studied different frequency ranges on twenty-one healthy subjects, looking for the best frequency band for shoulder pre-movement recognition. As a result, our BCI automatically located EEG channels contralateral to the moved limb, enhancing pre-movement recognition (ACC = 71.39 ± 12.68%, κ = 0.43 ± 0.25). The ability of the proposed BCIs to select specific EEG locations more cortically related to the moved limb could benefit the neuro-rehabilitation process.
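A covariance-plus-LDA pipeline of this kind can be sketched as follows. This is a simplified log-Euclidean tangent mapping on synthetic trials, not the paper's exact Riemannian metric; the channel counts and data are assumptions.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def tangent_vec(trial):
    """trial: (n_channels, n_samples) -> upper-triangular vector of log-covariance."""
    C = trial @ trial.T / trial.shape[1] + 1e-6 * np.eye(trial.shape[0])  # regularize
    L = logm(C).real                       # matrix log maps SPD matrices to a vector space
    iu = np.triu_indices(L.shape[0])
    return L[iu]

rng = np.random.default_rng(2)
rest = rng.normal(size=(30, 6, 100))
move = rng.normal(size=(30, 6, 100)); move[:, 2] *= 2.5   # pre-movement: one channel active
X = np.array([tangent_vec(t) for t in np.concatenate([rest, move])])
y = np.r_[np.zeros(30), np.ones(30)]
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```

Mapping covariances through the matrix logarithm is what lets a linear classifier such as LDA operate on them; the paper's subject-specific NMF channel selection would run before the covariance step.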
For brain-computer interfaces, resolving the differences between pre-movement and movement requires decoding neural ensemble activity across the motor cortex's functional regions and behavioural patterns. Here, we explored the underlying neural activity and mechanisms of a grasped motor task by recording electroencephalography (EEG) signals during the execution of hand movements in healthy subjects. The grasped movement comprised several sub-tasks: reaching the target, grasping the target, lifting the object upwards, and moving the object in the left or right direction. 163 trials of EEG data were acquired from 30 healthy participants who performed the grasped movement tasks. Rhythmic EEG activity was analysed during the pre-movement (alert task) condition and compared against grasped movement tasks while the arm was moved towards the left or right. The short positive-to-negative deflection initiating around −0.5 s before the onset of the movement cue can be used as a potential biomarker to differentiate movement initiation from movement. A rebound increment of 14% in beta oscillations and 26% in gamma oscillations in the central regions was observed and could be used to distinguish pre-movement from grasped movement tasks. Comparing movement initiation to grasp showed a decrease of 10% in beta oscillations and 13% in gamma oscillations, and there was a rebound increment of 4% in beta and 3% in gamma from grasp to grasped movement. We also investigated the combination of MRCPs and spectral estimates of α, β, and γ oscillations as features for machine learning classifiers that could categorize movement conditions. A support vector machine with a 3rd-order polynomial kernel yielded 70% accuracy. Pruning the ranked features to 5 leaf nodes reduced the error rate by 16%.
For decoding grasped movement in the context of BCI applications, this study identifies potential biomarkers, including the spatio-temporal characteristics of MRCPs, spectral information, and the choice of classifier, for optimally distinguishing initiation from grasped movement.
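The beta/gamma power changes reported above reduce to simple band-power estimates. A minimal Welch-PSD sketch on a synthetic signal, assuming a 250 Hz sampling rate (the paper does not state one):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Total Welch-PSD power of x inside the [lo, hi] Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=fs)    # nperseg=fs gives ~1 Hz resolution
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()

fs = 250
t = np.arange(5 * fs) / fs
# 20 Hz tone plus noise: the tone falls inside the beta band (13-30 Hz)
x = np.sin(2 * np.pi * 20 * t) + 0.2 * np.random.default_rng(3).normal(size=t.size)

beta = band_power(x, fs, 13, 30)
gamma = band_power(x, fs, 30, 80)
print(beta > gamma)
```

Tracking such band powers across the task epochs (pre-movement, grasp, grasped movement) is how the percentage changes quoted in the abstract would be computed.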
No abstract available
For years now, phase-amplitude cross frequency coupling (CFC) has been observed across multiple brain regions under different physiological and pathological conditions. It has been suggested that CFC serves as a mechanism that facilitates communication and information transfer between local and spatially separated neuronal populations. In non-invasive brain computer interfaces (BCI), CFC has not been thoroughly explored. In this work, we propose a CFC estimation method based on Linear Parameter Varying Autoregressive (LPV-AR) models and we assess its performance using both synthetic data and electroencephalographic (EEG) data recorded during attempted arm/hand movements of spinal cord injured (SCI) participants. Our results corroborate the potential of CFC as a feature for movement-attempt decoding and provide evidence of the superiority of the proposed CFC estimation approach over other commonly used techniques.
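For context, one of the "commonly used techniques" the paper compares against is the Hilbert-transform family of phase-amplitude coupling estimators. A mean-vector-length sketch on a synthetic theta-gamma coupled signal (this is the standard baseline, not the paper's LPV-AR method; all parameters are illustrative):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 500
t = np.arange(10 * fs) / fs
slow = np.sin(2 * np.pi * 6 * t)                   # theta-band phase signal
fast = (1 + slow) * np.sin(2 * np.pi * 60 * t)     # gamma amplitude locked to theta phase
x = slow + fast + 0.1 * np.random.default_rng(4).normal(size=t.size)

phase = np.angle(hilbert(bandpass(x, fs, 4, 8)))   # instantaneous theta phase
amp = np.abs(hilbert(bandpass(x, fs, 50, 70)))     # instantaneous gamma amplitude
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))    # mean vector length PAC index
print(mvl)
```

The mean vector length is near zero when amplitude is independent of phase and grows with coupling strength, which is what makes it usable as a decoding feature.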
Brain–Machine Interfaces (BMIs) have made significant progress in recent years; however, there are still several application areas in which improvement is needed, including the accurate prediction of body movement during Virtual Reality (VR) simulations. To achieve a high level of immersion in VR sessions, it is important to have bidirectional interaction, which is typically achieved through the use of movement-tracking devices, such as controllers and body sensors. However, it may be possible to eliminate the need for these external tracking devices by directly acquiring movement information from the motor cortex via electroencephalography (EEG) recordings, potentially leading to more seamless and immersive VR experiences. Numerous studies have investigated EEG recordings during movement. While the majority have focused on movement prediction from brain signals, fewer have addressed how to utilize such predictions during VR simulations, suggesting that further research is needed to fully understand the potential of EEG-based movement prediction in VR. In this research, we propose two neural network decoders designed to predict pre-arm-movement and during-arm-movement behavior from brain activity recorded during the execution of VR simulation tasks. For both decoders, we employ a Long Short-Term Memory model. The study's findings are highly encouraging, lending credence to the premise that this technology could replace external tracking devices.
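The core of such a decoder is the LSTM recurrence. A library-free sketch of a single LSTM cell step in NumPy, unrolled over a window of EEG features; the weights are random and the feature/hidden sizes are assumptions, so this shows only the mechanics, not the paper's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: (D,), h, c: (H,), W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])   # input/forget/output gates
    g = np.tanh(z[3*H:])                                               # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(5)
D, H, T = 8, 16, 50                        # EEG features, hidden units, window length
W = rng.normal(size=(4*H, D)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(T, D)):          # unroll over the EEG feature window
    h, c = lstm_step(x, h, c, W, U, b)
logit = rng.normal(size=H) @ h             # linear readout: pre- vs during-movement score
print(h.shape)
```

In practice one would train such a model with a deep-learning framework rather than hand-rolled NumPy; the sketch only makes the gating arithmetic concrete.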
Brain-computer interfaces (BCIs) establish a communication pathway between the human brain and external devices by decoding neural signals. This study focuses on enhancing the classification of Motor Imagery (MI) within BCI systems by leveraging advanced machine learning and deep learning techniques. The accurate classification of electroencephalogram (EEG) data is crucial for enhancing BCI performance. The BCI architecture processes electroencephalography signals through three critical stages: data pre-processing, feature extraction, and classification. The research evaluates the performance of five traditional machine learning classifiers, namely K-Nearest Neighbors (KNN), Support Vector Classifier (SVC), Logistic Regression (LR), Random Forest (RF), and Naive Bayes (NB), using the "PhysioNet EEG Motor Movement/Imagery Dataset". This dataset encompasses EEG data from various motor tasks, including both actual and imagined movements. Among the traditional classifiers, Random Forest achieved the highest accuracy of 91%, underscoring its efficacy in motor imagery classification within BCI systems. In addition to conventional approaches, the study also explores deep learning techniques, with Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks yielding accuracies of 88.18% and 16.13%, respectively. However, the proposed hybrid model, which synergistically combines CNN and LSTM, significantly surpasses both traditional machine learning and individual deep learning methods, achieving an exceptional accuracy of 96.06%. This substantial improvement highlights the potential of hybrid deep learning models to advance the state of the art in BCI systems, offering a more robust and precise approach to motor imagery classification.
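The traditional-classifier comparison reported above follows a standard scikit-learn pattern. A minimal sketch on synthetic features (loading the PhysioNet data is omitted; feature shapes and labels here are assumptions, so the printed scores are illustrative, not the paper's results):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 10))                     # stand-in for extracted EEG features
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # stand-in for MI class labels

models = {
    "KNN": KNeighborsClassifier(),
    "SVC": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NB": GaussianNB(),
}
# 5-fold cross-validated accuracy for each of the five classifiers
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
```

Swapping the synthetic `X`, `y` for real pre-processed EEG features reproduces the study's evaluation protocol for the five traditional baselines.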
Brain-Computer Interface (BCI) systems based on electroencephalography (EEG) signals are attracting considerable attention for decoding Motor Imagery (MI), because they make it possible to analyze and translate brain signals related to movement intentions. This technology has many applications in medicine, rehabilitation, mind-controlled computers, and assistive technologies. Despite significant progress in EEG-based BCI systems, challenges remain, such as signal noise, low decoding accuracy, and the instability and variability of signals. To address these limitations, this article presents a new approach to classifying MI from EEG signals using the Hilbert-Huang Transform (HHT) for pre-processing, Permutation Conditional Mutual Information Common Spatial Pattern (PCMICSP) for feature extraction, and a backpropagation neural network (BPNN) optimized with the Honey Badger Algorithm (HBA) as the classifier. Exploiting the ergodicity of the HBA, along with chaotic mechanisms and global convergence, this approach encodes and optimizes the weights and thresholds of the BPNN. A comprehensive optimal solution is first obtained through the honey badger algorithm and then refined to a more precise optimum by introducing chaotic disturbances. The efficiency of the proposed method was confirmed through experimental analysis on the publicly accessible EEGMMIDB benchmark (the EEG Motor Movement/Imagery Database). Two EEG signal conditions were considered, epileptic and non-epileptic, and the presented technique achieved a maximum accuracy of 89.82%, outperforming the comparison methods.
Nowadays, understanding how the brain works and the commands it issues has attracted the attention of researchers across various sciences. Advances in this field and the growing knowledge of the neural correlates of commands issued by the human brain have opened new horizons for socio-cognitive robotics. Research has shown that electroencephalogram (EEG) signals can capture the electrical activity of the brain. EEG signals contain useful information about brain function and its responses to various phenomena, which can be interpreted and sent as control commands. In this paper, an attempt was made to design and build a mechanical arm that can be guided both by moving and by imagining the movement of the right and left arms. For this purpose, brain signals were recorded using an EEG cap. The recorded signals were pre-processed and filtered with the EEGLAB toolbox. Statistical features of these signals were extracted, their number was reduced using the t-test method, and the best subset was selected by the sequential forward feature selection (SFFS) method. An LDA classifier was then used to classify signals into two classes, right hand and left hand. Motor activities and motor imagery were detected by this algorithm with accuracies of over 95% and over 90%, respectively. Finally, a mechanical arm with three degrees of freedom was designed and built, and the results of the machine learning algorithm were implemented on it.
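The feature-selection stage of this pipeline can be sketched with scikit-learn's sequential forward selector wrapping an LDA classifier. The features here are synthetic stand-ins (two informative dimensions planted by assumption), so the selected indices and accuracy only illustrate the mechanics.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n = 150
y = rng.integers(0, 2, n)                  # right-hand vs left-hand labels
X = rng.normal(size=(n, 12))               # stand-in statistical features
X[:, 0] += 2.0 * y                         # by construction, only features 0 and 3
X[:, 3] -= 1.5 * y                         # carry class information

lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select=2,
                                direction="forward").fit(X, y)
picked = np.flatnonzero(sfs.get_support())        # indices of selected features
acc = lda.fit(X[:, picked], y).score(X[:, picked], y)
print(picked, round(acc, 2))
```

In the paper, a t-test pre-screening would shrink the feature pool before this forward search, keeping the wrapper step cheap.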
This paper employs the movement-related cortical potential (MRCP), an electroencephalogram (EEG)-derived time-domain pattern, to assess the effect of robot-assisted motor training in seven post-stroke patients with hand impairment. Patients are divided into two groups: four subjects with supratentorial lesions and three subjects with infratentorial lesions. Both groups participated in multiple-session motor training of the affected hand with an AMADEO rehabilitation robot. During the pre- and post-training periods, three assessment procedures were performed: EEG signals derived from eight specific electrodes, hand-kinematic parameters, and clinical tests. After four weeks of training, the negative peak of the MRCP signal decreased across all electrodes and reached significance at seven of the eight electrodes for the first group according to a paired t-test ($p < 0.05$). For the second group, the negative peak of the MRCP signal decreased across all electrodes and reached significance at two of the eight electrodes (paired t-test, $p < 0.05$) after eight weeks. Moreover, these MRCP changes show a positive association with improvements in kinematic parameters and clinical test results for both groups. Hence, this study shows that improvement of clinical outcomes in robot-assisted training is associated with a reduction in the amplitude of the MRCP. Furthermore, infratentorial stroke patients show slower clinical improvement and require longer rehabilitation to produce significant changes in MRCP than subjects with supratentorial stroke.
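The per-electrode pre/post comparison reduces to a paired t-test. A SciPy sketch on synthetic negative-peak amplitudes (the values are illustrative, not the paper's data):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(8)
# Negative MRCP peak amplitudes (in microvolts) for 7 patients at one electrode
pre = -10.0 + rng.normal(scale=1.0, size=7)
# After training the peak becomes smaller in magnitude (less negative)
post = pre + 2.0 + rng.normal(scale=0.5, size=7)

t_stat, p_val = ttest_rel(pre, post)      # paired test: same patients, two sessions
print(p_val < 0.05)
```

Running this test per electrode, as the paper does, is what yields the "significant at seven of eight electrodes" style of result; with eight electrodes, a multiple-comparison correction would usually also be considered.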
The final grouping presents a complete technical pathway from low-level neural mechanism modeling, through advanced AI decoding algorithms, to the reconstruction of specific limb functions and closed-loop clinical rehabilitation systems. The core research trends are: 1) breakthroughs in brain-spine interface (BSI) technology, achieving direct neural drive that bypasses the lesion site; 2) the introduction of deep learning and neuromorphic computing (SNNs), which has greatly improved the accuracy and real-time performance of complex intent recognition; and 3) the combination of multimodal fusion with VR and electrical stimulation, shifting rehabilitation from single-modality training toward active neural remodeling and markedly enhancing clinical practicality and patients' motor function recovery.