Visual Brain-Computer Interfaces
Signal-Processing Algorithm Optimization and Feature-Recognition Efficiency
This group of papers focuses on improving SSVEP detection performance. The studies cover state-of-the-art algorithms ranging from multivariate iterative filtering (MIF), adaptive spatial filtering, and multivariate mode decomposition (MVMD) to 3D convolutional neural networks, addressing core challenges such as signal robustness, dynamic-window detection, and high information transfer rate (ITR).
- Multivariate Iterative Filtering-Based SSVEP Detection in Mobile Environment for Brain–Computer Interface Application(Kritiprasanna Das, R. B. Pachori, 2024, IEEE Sensors Letters)
- 3D Input Convolutional Neural Network for SSVEP Classification in Design of Brain Computer Interface for Patient User(Z. Oralhan, Burcu Oralhan, M. Khayyat, S. Abdel-Khalek, R. Mansour, 2022, Computational and Mathematical Methods in Medicine)
- Performance investigation of MVMD-MSI algorithm in frequency recognition for SSVEP-based brain-computer interface and its application in robotic arm control(Rongrong Fu, Shaoxiong Niu, Xiaolei Feng, Ye Shi, Chengcheng Jia, Jing Zhao, Guilin Wen, 2024, Medical & Biological Engineering & Computing)
- Space-time filter for SSVEP brain-computer interface based on the minimum variance distortionless response(S. N. Carvalho, G. Vargas, Thiago Bulhões da Silva Costa, Harlei Miguel de Arruda Leite, L. Coradine, Levy Boccato, D. Soriano, R. Attux, 2021, Medical & Biological Engineering & Computing)
- An online multi-channel SSVEP-based brain–computer interface using a canonical correlation analysis method(Guangyu Bin, Xiaorong Gao, Zheng Yan, Bo Hong, Shangkai Gao, 2009, Journal of Neural Engineering)
- Compressive sensing applied to SSVEP-based brain-computer interface in the cloud for online control of a virtual wheelchair(Hamilton Rivera-Flor, C. D. Guerrero-Méndez, K. A. Hernandez-Ossa, Denis Delisle Rodríguez, Ricardo C. de Mello, T. B. F. Filho, 2024, Biomedical Signal Processing and Control)
- Adaptive Spatial Filtering-based Component Exploration model for SSVEP-based Brain-Computer Interface for target identification(K. R. Swetha, R. K., S. V, 2023, Multimedia Tools and Applications)
- Attention-Focused Triggering Strategy for Dynamic Classification in SSVEP-Based Brain–Computer Interface(Tao Ding, Xingwei Zhao, Qing Ling, Zhouping Tang, Bo Tao, Han Ding, 2024, IEEE Transactions on Instrumentation and Measurement)
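Most of the detection pipelines listed above compare multichannel EEG against sine-cosine templates at each candidate stimulus frequency; Bin et al. (2009) popularized this canonical correlation analysis (CCA) approach. A minimal sketch of that idea, assuming EEG arrives as a samples × channels array (all data shapes, frequencies, and noise levels below are illustrative, not taken from any of the papers):

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference matrix for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def cca_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Return the candidate frequency whose references correlate most with the EEG."""
    n = eeg.shape[0]
    scores = [canonical_corr(eeg, reference_signals(f, fs, n, n_harmonics))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic check: a 10 Hz SSVEP (with per-channel phase lags) plus noise
rng = np.random.default_rng(0)
fs, n = 250, 500
t = np.arange(n) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + p) for p in (0, .3, .6, .9)])
eeg += 0.5 * rng.standard_normal((n, 4))
print(cca_detect(eeg, fs, [8.0, 10.0, 12.0]))  # → 10.0
```

The MIF-CCA and MVMD-MSI methods in this group keep the same template-matching back end but first decompose the EEG into modes to suppress motion and noise artifacts.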
Visual-Stimulus Paradigm Innovation and Multimodal Hybrid Systems
This group of studies targets improvements on the stimulation side together with multimodal integration. By introducing dual-frequency coding, optimizing stimulus interfaces with color/shape features, or combining P300, eye tracking, RSVP, motion-illusion-induced VEP (IVEP), and force-feedback technologies, these studies aim to enlarge the command space, reduce visual fatigue, and improve system accuracy in complex tasks.
- An Improved SSVEP-based Brain-Computer Interface with Low Contrast Visual Stimulation and its Application in UAV Control.(Yu Cheng, Lirong Yan, Muhammad Usman Shoukat, Jingyang She, Wenjiang Liu, Changcheng Shi, Yibo Wu, Fuwu Yan, 2024, Journal of Neurophysiology)
- Development of Flicker Visual Stimulus by Mixing Fundamental and Its Harmonic Frequencies for SSVEP-based Brain-Computer Interface(Nannaphat Siribunyaphat, Yunyong Punsawad, Y. Wongsawat, 2021, 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON))
- Theoretical Models and Neural Mechanisms of Mirror-Image Processing of Written Characters(Unknown Authors, Unknown Journal)
- Efficient dual-frequency SSVEP brain-computer interface system exploiting interocular visual resource disparities(Yike Sun, Yuhan Li, Yuzhen Chen, Chen Yang, Jingnan Sun, Liyan Liang, Xiaogang Chen, Xiaorong Gao, 2024, Expert Systems with Applications)
- A Hybrid Speller Design Using Eye Tracking and SSVEP Brain–Computer Interface(M. M. N. Mannan, M. A. Kamran, Shinil Kang, H. Choi, M. Jeong, 2020, Sensors)
- A hybrid P300-SSVEP brain-computer interface speller with a frequency enhanced row and column paradigm(X. Bai, Minglun Li, Shouliang Qi, Anna Ching Mei Ng, Tit Ng, Wei Qian, 2023, Frontiers in Neuroscience)
- A Novel Hybrid Brain–Computer Interface Combining the Illusion-Induced VEP and SSVEP(Ruxue Li, Xi Zhao, Zhenyu Wang, Guiying Xu, Honglin Hu, Ting Zhou, Tianheng Xu, 2023, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Investigation of Color and Shape Stimulus Configuration to SSVEP Brain-Computer Interface Performance(Salman Al Maghribi Suwandi, Ayumi Ohnishi, Tsutomu Terada, M. Tsukamoto, 2024, 2024 9th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS))
- Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items(Arkadiusz Kubacki, 2021, Sensors)
- Application of Hybrid SSVEP + P300 Brain Computer Interface to Control Avatar Movement in Mobile Virtual Reality Gaming Environment.(Deepak D. Kapgate, 2024, Behavioural Brain Research)
- SSVEP-assisted RSVP brain–computer interface paradigm for multi-target classification(L. Ko, Sandeep Vara Sankar D, Yufei Huang, Yun-Chen Lu, Siddharth Shaw, T. Jung, 2020, Journal of Neural Engineering)
Medical Rehabilitation Assistance and Communication Technologies for Special Populations
These studies directly target people with disabilities (e.g., stroke and ALS patients) and the elderly, developing clinical applications including a soft rehabilitation glove, smart wheelchairs, assistive speller prototypes, and near-vision testing, demonstrating the practical value and social significance of BCI technology in healthcare.
- An SSVEP-Based Brain Computer Interface Prototype for Assisted Living(Raheeq Darweesh, Dustin Cuscino, A. Geronimo, N. Elaraby, 2024, 2024 IEEE International Conference on Electro Information Technology (eIT))
- An SSVEP-Based Brain–Computer Interface Device for Wheelchair Control Integrated with a Speech Aid System(Abdulrahman Mohammed Alnour Ahmed, Yousef Al-Junaidi, Abdulaziz Al-Tayar, Ammar Qaid, K. Qureshi, 2025, Eng)
- SSVEP-Based Brain Computer Interface Controlled Soft Robotic Glove for Post-Stroke Hand Function Rehabilitation(Ning Guo, Xiaojun Wang, Dehao Duanmu, Xin Huang, Xiaodong Li, Yunli Fan, Hailan Li, Yongquan Liu, E. Yeung, M. To, Jianxiong Gu, Feng Wan, Yong Hu, 2022, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- A Study of EEG Applied to Presbyopia Examination - Hans Publishers(Unknown Authors, Unknown Journal)
Smart Homes, IoT, and Age-Friendly Assisted Living
This group of papers concerns the real-world deployment of SSVEP-BCIs in daily-living environments, integrating AR, wireless transmission, and Internet-of-Things technologies for home appliance control, music composition, and cursor positioning. The studies place particular emphasis on portable, low-power, online system designs for elderly users.
- Development of an Online Home Appliance Control System Using Augmented Reality and an SSVEP-Based Brain–Computer Interface(S. Park, Ho-Seung Cha, C. Im, 2019, IEEE Access)
- Brain-Controlled, AR-Based Home Automation System Using SSVEP-Based Brain-Computer Interface and EOG-Based Eye Tracker: A Feasibility Study for the Elderly End User(S. Park, Jisoo Ha, Jimin Park, Kyeong-wan Lee, Chang-Hwan Im, 2022, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- A Wireless Multifunctional SSVEP-Based Brain–Computer Interface Assistive System(Chin-Teng Lin, Ching-Yu Chiu, Avinash Kumar Singh, Jung-Tai King, L. Ko, Yun-Chen Lu, Yu-kai Wang, 2019, IEEE Transactions on Cognitive and Developmental Systems)
- Development of an Online Home Appliance Control System for the Elderly Based on SSVEP-Based Brain-Computer Interface: A Feasibility Study(S. Park, Ji-min Ha, Ho-Seung Cha, Kyeong-Gu Lee, C. Im, 2021, 2021 9th International Winter Conference on Brain-Computer Interface (BCI))
- SSVEP-based brain–computer interface for music using a low-density EEG system(S. Venkatesh, E. Miranda, Edward Braund, 2022, Assistive Technology)
- Effective 2-D cursor control system using hybrid SSVEP + P300 visual brain computer interface(Deepak Kapgate, 2022, Medical & Biological Engineering & Computing)
- A Brain-Computer Interface Augmented Reality Framework with Auto-Adaptive SSVEP Recognition(Yasmine Mustafa, Mohamed Elmahallawy, Tie-Mei Luo, Seif Eldawlatly, 2023, 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE))
Robots, Drones, and Highly Real-Time Mobile-Device Control
This group explores applications in highly dynamic environments, such as 3D drone navigation, part-picking robots, wall-crawling cleaning robots, and in-car environment control. It emphasizes human-autonomy teaming (HAT), collaborative autonomy algorithms, and technologies such as HUDs to achieve real-time responsiveness and safety in complex interaction scenarios.
- SSVEP-based brain-computer interface for bidirectional human-vehicle interaction(Vasiliy A. Mironov, D. Bolshakov, V. Gulyaev, A. Komarov, N. Uslugin, Petr Kazarin, S. Zhukov, August Li, R. Salikhov, V. Kazantsev, 2021, 2021 Third International Conference Neurotechnologies and Neurointerfaces (CNN))
- Mind Controlled Drone: An Innovative Multiclass SSVEP based Brain Computer Interface(Andrei Chiuzbaian, J. Jakobsen, S. Puthusserypady, 2019, 2019 7th International Winter Conference on Brain-Computer Interface (BCI))
- In-Car Environment Control Using an SSVEP-Based Brain-Computer Interface with Visual Stimuli Presented on Head-Up Display: Performance Comparison with a Button-Press Interface(S. Park, Minsu Kim, Hyerin Nam, Jinuk Kwon, Chang-Hwan Im, 2024, Sensors)
- Efficient Quadcopter Flight Control Using Hybrid SSVEP + P300 Visual Brain Computer Interface(Deepak Kapgate, 2021, International Journal of Human–Computer Interaction)
- SSVEP-Based Brain-Computer Interface for Part-Picking Robotic Co-Worker(Yao Li, T. Kesavadas, 2021, Journal of Computing and Information Science in Engineering)
- SSVEP-Based Brain-Computer Interface Controlled Robotic Platform With Velocity Modulation(Yue Zhang, Kun Qian, Shengquan Xie, Chaoyang Shi, Jun Li, Zhi-Li Zhang, 2023, IEEE Transactions on Neural Systems and Rehabilitation Engineering)
- Bootstrapping Human-Autonomy Collaborations by using Brain-Computer Interface of SSVEP for Multi-Agent Deep Reinforcement Learning(Joshua Ho, Chien-Min Wang, Chun-Hsiang Chuang, C. King, Chi-Wei Feng, Tungshan Chou, Yen-Min Chen, Yuhong Yang, Yi-Cheng Hsiao, 2022, 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS))
- EEG-Controlled Wall-Crawling Cleaning Robot Using SSVEP-Based Brain-Computer Interface(Lei Shao, Long Zhang, Abdelkader Nasreddine Belkacem, Yiming Zhang, Xiaoqi Chen, Ji Li, Hongli Liu, 2020, Journal of Healthcare Engineering)
- Quadcopter Control in Three-Dimensional Space Using SSVEP and Motor Imagery-Based Brain-Computer Interface(Devaj Parikh, K. George, 2020, 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON))
Foundational Datasets, Hardware Evaluation, and User-Experience Assessment
This group provides the underlying infrastructure for BCI development, including the construction of large-scale open datasets, the assessment of visual fatigue caused by prolonged use, and the exploration of the effectiveness of simplified, single-channel hardware.
- Dual-Alpha: a large EEG study for dual-frequency SSVEP brain–computer interface(Yike Sun, Liyan Liang, Yuhan Li, Xiaogang Chen, Xiaorong Gao, 2024, GigaScience)
- Assessment of visual fatigue in SSVEP-based brain-computer interface: a comprehensive study(Pablo F. Diez, Lorena Orosco, Agustina Garcés Correa, Luciano Carmona, 2024, Medical & Biological Engineering & Computing)
- A dataset of EEG signals from a single-channel SSVEP-based brain computer interface(G. Acampora, P. Trinchese, A. Vitiello, 2021, Data in Brief)
Taken together, the merged groups trace the evolution of visual brain-computer interfaces (SSVEP-BCIs) from basic theory to engineering application. Research continues to break ground in both back-end algorithms (deep learning, adaptive filtering) and front-end stimulation paradigms (dual-frequency coding, multimodal fusion) to raise ITR and user comfort, while applications have expanded dramatically to span medical rehabilitation, home assistance for the elderly, industrial collaboration, drones, intelligent driving, and other highly real-time, complex interaction domains. Meanwhile, the release of open datasets and the establishment of fatigue-evaluation frameworks mark the field's transition from laboratory research toward standardization and industrialization.
42 related publications in total
Abstract: Subjects wore spectacles with their best distance-vision correction while reading near-vision charts; an optometrist measured EEG signals to determine presbyopia and then performed further examination with trial frames, establishing a technique for applying EEG to near-vision measurement.
Mirror-image processing refers to an individual's ability to visually discriminate an original stimulus from its mirror image. Researchers have proposed several theoretical models of mirror-image processing, such as the cross-channel coordination-and-cooperation model, the inhibition-processing model, and the visuospatial-transformation model.
This paper presents a brain–computer interface (BCI) system based on steady-state visual evoked potential (SSVEP) for controlling an electric wheelchair integrated with a speech aid module. The system targets individuals with severe motor disabilities, such as amyotrophic lateral sclerosis (ALS) or multiple sclerosis (MS), who may experience limited mobility and speech impairments. EEG signals from the occipital lobe are recorded using wet electrodes and classified using deep learning models, including ResNet50, InceptionV4, and VGG16, as well as Canonical Correlation Analysis (CCA). The ResNet50 model demonstrated the best performance for nine-class SSVEP signal classification, achieving an offline accuracy of 81.25% and a real-time accuracy of 72.44%; these figures refer to SSVEP-based classification rather than motor imagery. The classified outputs are used to trigger predefined wheelchair movements and vocal commands using an Arduino-controlled system. The prototype was successfully implemented and verified through experimental evaluation, demonstrating promising results for mobility and communication assistance.
No abstract available
INTRODUCTION This research evaluated the feasibility of a hybrid SSVEP + P300 brain computer interface (BCI) for controlling the movement of an avatar in a virtual reality (VR) gaming environment (VR + BCI). Existing VR + BCI gaming environments have limitations, such as visual fatigue, a lower communication rate, minimum accuracy, and poor system comfort. Hence, there is a need for an optimized hybrid BCI system that can simultaneously evoke the strongest P300 and SSVEP potentials in the cortex. METHODS A BCI headset was coupled with a VR headset to generate a VR + BCI environment. The author developed a VR game in which the avatar's movement is controlled using the user's cortical responses with the help of a BCI headset. Specifically designed visual stimuli were used in the proposed system to elicit the strongest possible responses from the user's brain. The proposed system also includes an auditory feedback mechanism to facilitate precise avatar movement. RESULTS AND CONCLUSIONS Conventional P300 BCI and SSVEP BCI were also used to control the movements of the avatar, and their performance metrics were compared to those of the proposed system. The results demonstrated that the hybrid SSVEP + P300 BCI system was superior to the other systems for controlling avatar movement.
Abstract Background The domain of brain–computer interface (BCI) technology has experienced significant expansion in recent years. However, the field continues to face a pivotal challenge due to the dearth of high-quality datasets. This lack of robust datasets serves as a bottleneck, constraining the progression of algorithmic innovations and, by extension, the maturation of the BCI field. Findings This study details the acquisition and compilation of electroencephalogram data across 3 distinct dual-frequency steady-state visual evoked potential (SSVEP) paradigms, encompassing over 100 participants. Each experimental condition featured 40 individual targets with 5 repetitions per target, culminating in a comprehensive dataset consisting of 21,000 trials of dual-frequency SSVEP recordings. We performed an exhaustive validation of the dataset through signal-to-noise ratio analyses and task-related component analysis, thereby substantiating its reliability and effectiveness for classification tasks. Conclusions The extensive dataset presented is set to be a catalyst for the accelerated development of BCI technologies. Its significance extends beyond the BCI sphere and holds considerable promise for propelling research in psychology and neuroscience. The dataset is particularly invaluable for discerning the complex dynamics of binocular visual resource distribution.
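The Dual-Alpha abstract above validates its recordings with signal-to-noise ratio analysis. One common SSVEP SNR convention measures spectral power at the stimulus bin against the mean power of neighboring FFT bins; a minimal sketch, with all parameter values (sampling rate, neighbor count, test frequency) chosen purely for illustration:

```python
import numpy as np

def ssvep_snr_db(signal, fs, target_freq, n_neighbors=5):
    """SNR in dB: power at the FFT bin nearest target_freq, divided by the
    mean power of n_neighbors bins on each side of it."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    k = int(np.argmin(np.abs(freqs - target_freq)))
    lo = max(k - n_neighbors, 0)
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:k + 1 + n_neighbors]]
    return 10 * np.log10(spectrum[k] / neighbors.mean())

# Synthetic check: a clean 12 Hz component should yield a large positive SNR
fs, n = 250, 1000
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 12 * t) + 0.2 * np.random.default_rng(1).standard_normal(n)
print(round(ssvep_snr_db(sig, fs, 12.0), 1))
```

Per-trial SNR values like this are what make it possible to compare paradigms and screen out unusable recordings in large datasets.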
Controlling the in-car environment, including temperature and ventilation, is necessary for a comfortable driving experience. However, it often distracts the driver’s attention, potentially causing critical car accidents. In the present study, we implemented an in-car environment control system utilizing a brain-computer interface (BCI) based on steady-state visual evoked potential (SSVEP). In the experiment, four visual stimuli were displayed on a laboratory-made head-up display (HUD). This allowed the participants to control the in-car environment by simply staring at a target visual stimulus, i.e., without pressing a button or averting their eyes from the front. The driving performances in two realistic driving tests—obstacle avoidance and car-following tests—were then compared between the manual control condition and SSVEP-BCI control condition using a driving simulator. In the obstacle avoidance driving test, where participants needed to stop the car when obstacles suddenly appeared, the participants showed significantly shorter response time (1.42 ± 0.26 s) in the SSVEP-BCI control condition than in the manual control condition (1.79 ± 0.27 s). No-response rate, defined as the ratio of obstacles that the participants did not react to, was also significantly lower in the SSVEP-BCI control condition (4.6 ± 14.7%) than in the manual control condition (20.5 ± 25.2%). In the car-following driving test, where the participants were instructed to follow a preceding car that runs at a sinusoidally changing speed, the participants showed significantly lower speed difference with the preceding car in the SSVEP-BCI control condition (15.65 ± 7.04 km/h) than in the manual control condition (19.54 ± 11.51 km/h). The in-car environment control system using SSVEP-based BCI showed a possibility that might contribute to safer driving by keeping the driver’s focus on the front and thereby enhancing the overall driving performance.
The brain-computer interface (BCI) has evolved into a vital communication medium, offering a direct link to the brain. This medium is a game-changer for individuals with locked-in syndrome, such as those with Amyotrophic Lateral Sclerosis (ALS). It also holds significant promise for direct communication without the need for typing or voice commands. One of the most effective BCI methods uses the Steady State Visual Evoked Potential (SSVEP), a distinctive brain-signal response. However, new modalities are required to enhance its performance, given its relatively slow response time compared with normal communication. This study aimed to investigate the impact of color and shape on the performance of the SSVEP BCI. The research involved recording 8 channels of EEG signals as participants observed visual stimuli displayed on a monitor. The stimuli included flashing lights with frequency, color, and shape variations. The results showed the highest configuration accuracy of 70% with an Information Transfer Rate (ITR) of 15.5 bits/minute. Although statistically insignificant, the results suggest that color and shape influence SSVEP BCI performance.
Expressing basic needs may seem like a simple task, but not for those with deteriorated or lost ability to speak and move. This paper presents the design, application, and testing of an alternative communication system based on Brain Computer Interface (BCI). The prototype includes six LEDs flickering at different frequencies, and each LED corresponds to one command. Depending on the direction of the gaze of the subject, the neuronal activity pattern in their occipital lobe will be consistent with the targeted LED flickering rate. By recording the electroencephalogram (EEG), and determining the neuronal firing frequency, the system uses Steady State Visual Evoked Potentials (SSVEPs) to convey one of six commands to caregivers. The SSVEP-based system uses an OpenBCI Ganglion 4-channel biosensing board to acquire brain signals and Arduino Uno for system control. Based on preliminary testing on eleven subjects, the overall accuracy of the system was 89%. Accuracy is the percentage at which the system correctly recognized and sent the selected command.
Efficient communication and regulation are crucial for advancing brain-computer interfaces (BCIs), with the steady-state visual evoked potential (SSVEP) paradigm demonstrating high accuracy and information transfer rates. However, the conventional SSVEP paradigm encounters challenges related to visual occlusion and fatigue. In this study, we propose an improved SSVEP paradigm that addresses these issues by lowering the contrast of the visual stimuli. The improved paradigms outperform the traditional paradigm in the experiments, significantly reducing the visual stimulation of the SSVEP paradigm. Furthermore, we apply this enhanced paradigm to a BCI navigation system, enabling two-dimensional navigation of Unmanned Aerial Vehicles (UAVs) from a first-person perspective. Experimental results demonstrate the enhanced SSVEP-based BCI system's accuracy in navigation and search tasks. Our findings highlight the feasibility of the enhanced SSVEP paradigm in mitigating visual occlusion and fatigue, presenting a more intuitive and natural approach for BCIs to control external equipment.
No abstract available
No abstract available
Objective This study proposes a new hybrid brain-computer interface (BCI) system to improve spelling accuracy and speed by stimulating P300 and steady-state visually evoked potential (SSVEP) in electroencephalography (EEG) signals. Methods A frequency enhanced row and column (FERC) paradigm is proposed to incorporate the frequency coding into the row and column (RC) paradigm so that the P300 and SSVEP signals can be evoked simultaneously. A flicker (white-black) with a specific frequency from 6.0 to 11.5 Hz with an interval of 0.5 Hz is assigned to one row or column of a 6 × 6 layout, and the row/column flashes are carried out in a pseudorandom sequence. A wavelet and support vector machine (SVM) combination is adopted for P300 detection, an ensemble task-related component analysis (TRCA) method is used for SSVEP detection, and the two detection possibilities are fused using a weight control approach. Results The implemented BCI speller achieved an accuracy of 94.29% and an information transfer rate (ITR) of 28.64 bit/min averaged across 10 subjects during the online tests. An accuracy of 96.86% is obtained during the offline calibration tests, higher than that of only using P300 (75.29%) or SSVEP (89.13%). The SVM in P300 outperformed the previous linear discrimination classifier and its variants (61.90–72.22%), and the ensemble TRCA in SSVEP outperformed the canonical correlation analysis method (73.33%). Conclusion The proposed hybrid FERC stimulus paradigm can improve the performance of the speller compared with the classical single stimulus paradigm. The implemented speller can achieve comparable accuracy and ITR to its state-of-the-art counterparts with advanced detection algorithms.
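The FERC speller above fuses its P300 and SSVEP detection probabilities "using a weight control approach". The core idea can be sketched as a convex combination of normalized per-target scores; the weight value and the example score vectors below are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_scores(p300_scores, ssvep_scores, w=0.5):
    """Pick a target from a convex combination of normalized per-target
    scores; w plays the role of the paper's controllable fusion weight."""
    p300 = np.asarray(p300_scores, dtype=float)
    ssvep = np.asarray(ssvep_scores, dtype=float)
    fused = w * p300 / p300.sum() + (1.0 - w) * ssvep / ssvep.sum()
    return int(np.argmax(fused))

# P300 evidence is ambiguous between targets 1 and 2; SSVEP breaks the tie
p300 = [0.10, 0.45, 0.45, 0.00]
ssvep = [0.05, 0.15, 0.70, 0.10]
print(fuse_scores(p300, ssvep, w=0.5))  # → 2
```

The benefit reported in the abstract (94.29% fused vs. 75.29% P300-only and 89.13% SSVEP-only) comes precisely from this kind of complementarity: one detector disambiguates the cases where the other is uncertain.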
Steady-state visual-evoked potential (SSVEP)-based brain–computer interfaces (BCIs) are prominent in the information interaction field due to their noninvasive nature. Fixed-window-based classification for labeled data fails to capture the dynamics of the whole process in target selection and recognition, leading to resource inefficiencies. In response, we propose an attention-focused triggering (AFT) strategy for dynamic classification (DC), drawing inspiration from the vision-based attention system and dynamic window detection. Specifically, the attention and visual regions are extracted from the entire brain, with the attention region guiding vision to focus on stimuli, facilitating the detection in the visual region. This strategy quantifies attention indicators in the attention region and sets the data onset for target selection by judging concentration levels. Subsequently, the visual region data are dynamically truncated based on the prediction coefficient to enhance decision-making efficiency. As a result, the proposed method can minimize the necessary data length without compromising accuracy. In an experiment involving a nine-target BCI with 15 healthy subjects, the results demonstrate that relative to the fixed window strategy with markers, the proposed method has superior accuracy (an increase of 17.4%) and an elevated information transfer rate (ITR) (rising by 24.9 bits/min), which can enhance the adaptability of online label-free BCI systems.
No abstract available
This letter aims to improve the steady-state visual evoked potential (SSVEP) detection performance in mobile environments. Multiscale analysis of the eight-channel electroencephalogram (EEG) signals is performed using multivariate iterative filtering (MIF). Mode-aligned multivariate modes obtained from MIF are fed to canonical correlation analysis (CCA) for finding the correlation with sine–cosine reference signal. The correlation coefficients from different multivariate intrinsic mode functions are computed as features, which have been classified using machine learning classifiers: k nearest neighbor, linear discriminant analysis (LDA), and support vector machine (SVM). The proposed framework is evaluated using a real-time EEG dataset recorded in a mobile environment with the help of extensive experiments. The LDA classifier provides 88.99%, 84.13%, 81.52%, and 76.62% accuracies for 0.0, 0.8, 1.6, and 2.0 m/s speed, respectively, when classifiers are trained specific to each subject. Subject-independent LDA classifiers achieve 89.49%, 85.00%, 84.20%, and 69.90% accuracies for the aforementioned four different speeds. The MIF-based CCA (MIF-CCA) framework achieved slightly higher accuracy than conventional CCA-based SSVEP detection when the subject was standing or moving at a lower speed, but when the subject was moving at a speed of 2.0 m/s, the average accuracy of MIF-CCA was higher by 21.86%, as compared with CCA algorithm, which shows the usefulness and robustness of the proposed approach. Finally, the proposed feature extraction techniques for mobile EEG signals will be useful for classifying EEG signals in a mobile environment.
Soft robotic gloves under brain-computer interface (BCI) control have been used for post-stroke hand function rehabilitation. Motor imagery (MI)-based BCI with robot-aided devices has been demonstrated to be an effective neural rehabilitation tool for improving post-stroke hand function. However, MI-BCI users require lengthy training and often experience unsuccessful, unsatisfying results at the beginning. As an alternative non-invasive paradigm to MI-BCI, a steady-state visually evoked potential (SSVEP)-based BCI was proposed for user-intention detection to trigger the soft robotic glove for post-stroke hand function rehabilitation. Thirty post-stroke patients with impaired hand function were randomly and equally divided into three groups to receive conventional, robotic, or BCI-robotic therapy in this randomized controlled trial (RCT). Clinical assessments of the Fugl-Meyer Motor Assessment of Upper Limb (FMA-UL), Wolf Motor Function Test (WMFT) and Modified Ashworth Scale (MAS) were performed at pre-training, post-training and three-month follow-up. Compared with the other groups, the BCI-robotic group showed significant improvement after training in FMA full score (10.05 ± 8.03, p = 0.001), FMA shoulder/elbow (6.2 ± 5.94, p = 0.0004), FMA wrist/hand (4.3 ± 2.83, p = 0.007), and WMFT (5.1 ± 5.53, p = 0.037). The improvement in FMA was significantly correlated with BCI accuracy (r = 0.714, p = 0.032). Recovery of hand function after rehabilitation with the SSVEP-BCI-controlled soft robotic glove was better than with robotic glove rehabilitation alone, and equivalent in efficacy to previously reported MI-BCI robotic hand rehabilitation. This proves the feasibility of the SSVEP-BCI-controlled soft robotic glove in post-stroke hand function rehabilitation.
Traditional single-modality brain-computer interface (BCI) systems are limited by their reliance on a single characteristic of brain signals. To address this issue, incorporating multiple features from EEG signals can provide robust information to enhance BCI performance. In this study, we designed and implemented a novel hybrid paradigm that combined illusion-induced visual evoked potential (IVEP) and steady-state visual evoked potential (SSVEP) with the aim of leveraging their features simultaneously to improve system efficiency. The proposed paradigm was validated through two experimental studies, which encompassed feature analysis of IVEP with a static paradigm, and performance evaluation of hybrid paradigm in comparison with the conventional SSVEP paradigm. The characteristic analysis yielded significant differences in response waveforms among different motion illusions. The performance evaluation of the hybrid BCI demonstrates the advantage of integrating illusory stimuli into the SSVEP paradigm. This integration effectively enhanced the spatio-temporal features of EEG signals, resulting in higher classification accuracy and information transfer rate (ITR) within a short time window when compared to traditional SSVEP-BCI in four-command task. Furthermore, the questionnaire results of subjective estimation revealed that proposed hybrid BCI offers less eye fatigue, and potentially higher levels of concentration, physical condition, and mental condition for users. This work first introduced the IVEP signals in hybrid BCI system that could enhance performance efficiently, which is promising to fulfill the requirements for efficiency in practical BCI control systems.
Brain-Computer Interface (BCI) initially gained attention for developing applications that aid physically impaired individuals. Recently, the idea of integrating BCI with Augmented Reality (AR) emerged, which uses BCI not only to enhance the quality of life for individuals with disabilities but also to develop mainstream applications for healthy users. One commonly used BCI signal pattern is the Steady-state Visually-evoked Potential (SSVEP), which captures the brain's response to flickering visual stimuli. SSVEP-based BCI-AR applications enable users to express their needs/wants by simply looking at corresponding command options. However, individuals are different in brain signals and thus require per-subject SSVEP recognition. Moreover, muscle movements and eye blinks interfere with brain signals, and thus subjects are required to remain still during BCI experiments, which limits AR engagement. In this paper, we (1) propose a simple adaptive ensemble classification system that handles the inter-subject variability, (2) present a simple BCI-AR framework that supports the development of a wide range of SSVEP-based BCI-AR applications, and (3) evaluate the performance of our ensemble algorithm in an SSVEP-based BCI-AR application with head rotations which has demonstrated robustness to the movement interference. Our testing on multiple subjects achieved a mean accuracy of 80% on a PC and 77% using the HoloLens AR headset, both of which surpass previous studies that incorporate individual classifiers and head movements. In addition, our visual stimulation time is 5 seconds which is relatively short. The statistically significant results show that our ensemble classification approach outperforms individual classifiers in SSVEP-based BCIs.
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have been extensively studied due to many benefits, such as non-invasiveness, high information transfer rate, and ease of use. SSVEP-based BCI has been investigated in various applications by projecting brain signals to robot control commands. However, the movement direction and speed are generally fixed and prescribed, neglecting the user’s requirement for velocity changes during practical implementations. In this study, we proposed a velocity modulation method based on stimulus brightness for controlling the robotic arm in the SSVEP-based BCI system. A stimulation interface was designed, incorporating flickers, target and a cursor workspace. The synchronization of the cursor and robotic arm does not require the subject’s eye switch between the stimuli and the robot. The feature vector consists of the characteristics of the signal and the classification result. Subsequently, the Gaussian mixture model (GMM) and Bayesian inference were used to calculate the posterior probabilities that the signal came from a high or low brightness flicker. A brain-actuated speed function was designed by incorporating the posterior probability difference. Finally, the historical velocity was considered to determine the final velocity. To demonstrate the effectiveness of the proposed method, online experiments, including single- and multi-target reaching tasks, were conducted. The extensive experimental results validated the feasibility of the proposed method in reducing reaching time and achieving proximity to the target.
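The velocity-modulation scheme above rests on a Bayesian posterior over the two flicker-brightness classes. A simplified sketch, with single 1-D Gaussians standing in for the paper's Gaussian mixture models, and with all means, variances, and the speed function's gain assumed purely for illustration:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def brightness_posteriors(feature, params_high, params_low, prior_high=0.5):
    """Bayes posteriors that a feature came from the high- vs. low-brightness
    flicker; single Gaussians stand in for the paper's GMM class models."""
    lh = gaussian_pdf(feature, *params_high) * prior_high
    ll = gaussian_pdf(feature, *params_low) * (1.0 - prior_high)
    total = lh + ll
    return lh / total, ll / total

def modulated_speed(base_speed, p_high, p_low, gain=1.0):
    """Hypothetical speed function: speed rises with the posterior difference."""
    return base_speed * (1.0 + gain * (p_high - p_low))

# A feature value near the high-brightness class mean favors speeding up
p_high, p_low = brightness_posteriors(0.8, (1.0, 0.2), (0.3, 0.2))
print(p_high > p_low)  # → True
print(round(modulated_speed(10.0, p_high, p_low), 2))
```

The paper additionally smooths this output with the historical velocity; the sketch shows only the per-trial posterior-difference step.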
No abstract available
Over the past decades, brain-computer interfaces (BCIs) have been developed to provide individuals with an alternative communication channel toward external environment. Although the primary target users of BCI technologies include the disabled or the elderly, most newly developed BCI applications have been tested with young, healthy people. In the present study, we developed an online home appliance control system using a steady-state visual evoked potential (SSVEP)-based BCI with visual stimulation presented in an augmented reality (AR) environment and electrooculogram (EOG)-based eye tracker. The performance and usability of the system were evaluated for individuals aged over 65. The participants turned on the AR-based home automation system using an eye-blink-based switch, and selected devices to control with three different methods depending on the user’s preference. In the online experiment, all 13 participants successfully completed the designated tasks to control five home appliances using the proposed system, and the system usability scale exceeded 70. Furthermore, the BCI performance of the proposed online home appliance control system surpassed the best results of previously reported BCI systems for the elderly.
In this paper, we present a bespoke brain–computer interface (BCI) developed for a person with severe motor impairments, previously a violinist, to allow her to perform and compose music at home. It uses steady-state visually evoked potential (SSVEP) and adopts a dry, low-density, wireless electroencephalogram (EEG) headset. In this study, we investigated two parameters: (1) placement of the EEG headset and (2) inter-stimulus distance, and found that the former significantly improved the information transfer rate (ITR). To analyze the EEG, we adopted canonical correlation analysis (CCA) without weight calibration. The BCI for musical performance realized a high ITR of 37.59 ± 9.86 bits min−1 and a mean accuracy of 88.89 ± 10.09%. The BCI for musical composition obtained an ITR of 14.91 ± 2.87 bits min−1 and a mean accuracy of 95.83 ± 6.97%. The BCI was successfully deployed to the person with severe motor impairments. She regularly uses it for musical composition at home, demonstrating how BCIs can be translated from laboratories to real-world scenarios.
No abstract available
In this study, we implemented a new home appliance control system by combining electroencephalography (EEG)-based brain-computer interface (BCI), augmented reality (AR), and internet of things (IoT) technologies. We adopted a steady-state visual evoked potential (SSVEP)-based BCI paradigm for the implementation of a fast and robust BCI system. In the offline experiment, we compared the performances of three BCIs adopting different types of visual stimuli in an AR environment to determine the optimal visual stimulus. In the online experiment, we evaluated the feasibility of the proposed smart home system using the optimal stimulus by controlling three home appliances in real time. The visual stimuli were presented on a see-through head-mounted display (HMD), while the recorded brain activity was analyzed to classify the control command, and the home appliances were controlled through IoT. In the offline experiment, a grow/shrink stimulus (GSS) consisting of a star-shaped flickering object of varying size was selected as the optimal stimulus, eliciting SSVEP responses more effectively than the other options. In the online experiment, all users could turn the BCI-based control system on/off whenever they wanted using the eye-blinking-based electrooculogram (EOG) switch, and could successfully perform all the designated control tasks without difficulty. The average classification accuracy of the SSVEP-BCI-based control system was 92.8%, with an information transfer rate (ITR) of 37.4 bits/min. The proposed system exhibited excellent performance, surpassing the best results reported in previous studies on external device control based on BCI using an HMD as the rendering device.
The paper presents a collection of electroencephalography (EEG) data from a portable Steady State Visual Evoked Potentials (SSVEP)-based Brain Computer Interface (BCI). The collection of data was acquired by means of experiments based on repetitive visual stimuli with four different flickering frequencies. The main novelty of the proposed data set is related to the usage of a single-channel dry-sensor acquisition device. Different from conventional BCI helmets, this kind of device strongly improves the users’ comfort and, therefore, there is a strong interest in using it to pave the way towards the future generation of Internet of Things (IoT) applications. Consequently, the dataset proposed in this paper aims to act as a key tool to support the research activities in this emerging topic of human-computer interaction.
The assistive, adaptive, and rehabilitative applications of EEG-based robot control and navigation are undergoing a major transformation in both dimension and scope. Against the background of artificial intelligence, medical and nonmedical robots have developed rapidly and are gradually being applied to enhance the quality of people's lives. We focus on connecting the brain with a mobile home robot by translating brain signals into computer commands, building a brain-computer interface that promises to greatly enhance the quality of life of disabled and able-bodied people by considerably improving their autonomy, mobility, and abilities. Several types of robots have been controlled using BCI systems to complete simple and/or complicated real-time tasks with high performance. In this paper, a new EEG-based intelligent teleoperation system was designed for a mobile wall-crawling cleaning robot, which uses crawler tracks instead of traditional wheels so that it can clean windows as well as floors. To control the robot's position for wall climbing and cleaning, we extracted steady-state visual evoked potentials (SSVEPs) from the collected electroencephalography (EEG) signals. The visual stimulation interface of the proposed SSVEP-based BCI was composed of four flicker pieces with different frequencies (6 Hz, 7.5 Hz, 8.57 Hz, and 10 Hz). Seven subjects were able to smoothly control the movement directions of the cleaning robot by looking at the corresponding flicker. To solve the multiclass problem and thereby clean the wall within a short period, the canonical correlation analysis (CCA) classification algorithm was used. Offline and online experiments were held to analyze and classify EEG signals and use them as real-time commands.
The proposed system was efficient in the classification and control phases, achieving an accuracy of 89.92% and a responsive bit rate of 22.23 bits/min. These results suggest that the proposed EEG-based cleaning robot system is promising for smart home control, completing wall-cleaning tasks with efficiency, safety, and robustness.
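Several abstracts in this group, including this one, rely on canonical correlation analysis (CCA) for SSVEP frequency recognition. A minimal numpy sketch, assuming sine/cosine reference templates at each candidate frequency and its second harmonic (the harmonic count, window length, and synthetic test signal are illustrative assumptions):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y
    (rows are samples), computed via QR + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def cca_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference templates at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def recognize_frequency(eeg, fs, candidate_freqs):
    """Return the stimulus frequency whose reference templates correlate
    most strongly with the multi-channel EEG (samples x channels)."""
    n = eeg.shape[0]
    scores = {f: max_canonical_corr(eeg, cca_reference(f, fs, n))
              for f in candidate_freqs}
    return max(scores, key=scores.get)

# Synthetic check: a noisy 10 Hz SSVEP across 3 channels, 2 s at 250 Hz
fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(0)
eeg = np.column_stack(
    [np.sin(2 * np.pi * 10 * t + ph) + 0.5 * rng.standard_normal(n)
     for ph in (0.0, 0.3, 0.6)])
detected = recognize_frequency(eeg, fs, [6.0, 7.5, 8.57, 10.0])
```

Because CCA pools evidence across channels and harmonics, it needs no per-subject calibration, which is why it recurs throughout this literature.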
This research presents the performance of 3-dimensional input convolutional neural networks for steady-state visual evoked potential classification in a wireless EEG-based brain-computer interface system. The overall performance of a brain-computer interface system depends on its information transfer rate, which is affected by parameters such as signal classification accuracy, the structure of the signal stimulator, and user task completion time. In this study, we used three types of signal classification methods: 1-dimensional, 2-dimensional, and 3-dimensional input convolutional neural networks. In an online experiment using the 3-dimensional input convolutional neural network, we reached an average classification accuracy of 93.75% and an average information transfer rate of 58.35 bits/min. Both results are significantly higher than those of the other methods used in our experiments. Moreover, user task completion time was reduced by using the 3-dimensional input convolutional neural network. The proposed method is a novel, state-of-the-art model for steady-state visual evoked potential classification.
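The information transfer rates quoted throughout these abstracts are conventionally computed with Wolpaw's formula, ITR = (60/T)·[log2 N + P log2 P + (1−P) log2((1−P)/(N−1))] bits/min, where N is the number of targets, P the classification accuracy, and T the selection time in seconds. A direct implementation follows; the 2 s selection time in the example is a hypothetical value, not a figure from the paper:

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Information transfer rate in bits/min per Wolpaw's formula."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy: full log2(N) bits
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s

# e.g. a 4-target interface at 93.75% accuracy, 2 s per selection (assumed)
itr = wolpaw_itr(4, 0.9375, 2.0)
```

The formula shows why both accuracy and window length matter: halving the selection time doubles the ITR only if accuracy holds up.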
Human-Autonomy Teaming (HAT) has become one of the emerging AI trends due to the advances in sophisticated machine design that allows closer cooperation with humans while performing moral, reasonable, and applicable tasks as humans’ most exemplary assistants. Based on HAT’s pursuing the collective goal and sharing the authority between humans and machines, our research aims at answering whether humans’ brain-computer interface (BCI) helps achieve efficient collaborations of human with Reinforcement Learning (RL) agents. How can it efficiently facilitate human-in-the-loop guidance to bootstrap the training of the agents? This study proposes a BCI-based system that interacts with RL agents as a human-in-the-loop teaming integration. The neural responses elicited by the Steady-State Visual Evoked Potential in BCI facilitate the collaboration of learning agents with humans and accomplish this goal in a game simulation environment. The results of our proposed system, NeuroRL, show significant improvement by reducing the non-stationarity of exploitations and explorations in the RL agents. With BCI-assisted human-in-the-loop, the rewards can be optimized during the early investigations to achieve more efficient convergence in the training. The novel design proposed in this study can extend the development of the emerging HAT field and knowledge-based RL systems for various applications in dynamic environments.
Steady-state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to the advantages of robustness, a large number of commands, high classification accuracies, and high information transfer rates (ITRs). However, the use of several simultaneous flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose a stimuli-responsive hybrid speller using electroencephalography (EEG) and video-based eye-tracking to increase user comfort when large numbers of stimuli flicker simultaneously. A canonical correlation analysis (CCA)-based framework was able to identify the target frequency from a 1 s duration of flickering signal. Our proposed BCI-speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI-spellers use as many frequencies as there are targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued-spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free-spelling task. Consequently, our proposed speller is superior to other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye-tracking and SSVEP BCI-based system will ultimately enable a truly high-speed communication channel.
One of the expectations for the next generation of industrial robots is to work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, prevailing industrial robots are not good at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) which can directly deliver human cognition to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while operators visually inspect the quality of parts. The BCI is verified through a human subject study. In the study, a camera by the side of the conveyor takes a photo of each industrial part and presents it to the operator automatically. When the operator looks at the photo, electroencephalography (EEG) is collected through the BCI, and the inspection decision is extracted from the SSVEPs in the EEG. When the operator identifies a defective part, the signal is communicated to the robot, which locates the defective part with a second camera and removes it from the conveyor. The robot can grasp various parts with our random grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction, trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA model was further validated in an online experiment and achieved 93% accuracy in identifying and removing defective parts.
Brain-computer interface (BCI) is a technology that provides a direct communication channel between a user and the external environment using the user's brain activity. For the past decades, however, most BCI systems were tested with young people, even though the elderly are the primary target users of BCI systems. Moreover, it has been frequently reported that BCI systems show significantly lower performance with the elderly than with young people. In the present study, to evaluate the feasibility of a steady-state visual evoked potential (SSVEP)-BCI-based home appliance control system, seventeen people over the age of 65 were recruited for an offline experiment in which their SSVEP responses were recorded while visual stimuli were presented in an augmented reality (AR) environment via a see-through head-mounted display (HMD). An average classification accuracy of 94.1% and an information transfer rate (ITR) of 46.5 bits/min were achieved with a window size of 3 s. Based on the results from the offline experiment, we tested the proposed online home appliance control system with a 65-year-old female participant; the online experiment is ongoing as more participants are recruited.
No abstract available
The objective of this study was to assess the feasibility of hybrid SSVEP + P300 visual BCI systems for quad-copter flight control in the physical world. Existing BCI-based quad-copter flight control suffers from slow navigation, low system accuracy, rigorous user-training requirements, and a small number of independent control commands. Hence, a hybrid BCI design is needed that combines evoked SSVEP and P300 potentials to control the flight direction of quad-copter movement. The GUI is designed so that the user can effectively control quad-copter flight by gazing at visual stimulus buttons that elicit SSVEP and P300 potentials simultaneously in the human cortex. We compare the performance metrics of the proposed flight control system with existing BCI-based flight controls, namely conventional SSVEP BCI and P300 BCI, and with commercially available keyboard flight control systems. The results show that the proposed system outperforms the existing BCI-based flight control systems but has slightly lower efficiency than the commercial keyboard flight control systems. Furthermore, the proposed quad-copter flight control system proved suitable for patients with severe motor disabilities.
A practical brain-computer interface (BCI) system is a challenging research problem in both software and hardware. Steady-state visual evoked potential (SSVEP) is the most popular paradigm for spelling and for steering an electric wheelchair, as it achieves high accuracy with little training time. However, a clear SSVEP response is not present in all users. Therefore, this work proposes an improvement to the SSVEP visual stimulation pattern: a modified visual stimulus that mixes fundamental and harmonic flickering frequencies to enhance the SSVEP response. In our experiment, the proposed visual stimulus pattern yielded SSVEP amplitudes approximately 9% higher than the conventional pattern. Future work will add participants and implement the method in an online BCI system.
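The mixed fundamental-plus-harmonic flicker described above is straightforward to generate as a luminance waveform. The harmonic weight and the normalization to a [0, 1] display range below are illustrative assumptions, since the abstract does not specify a mixing ratio:

```python
import math

def mixed_flicker(freq, fs, duration_s, harmonic_weight=0.5):
    """Luminance samples mixing the fundamental flicker frequency with its
    second harmonic, normalized to the [0, 1] display range."""
    n = int(fs * duration_s)
    wave = []
    for i in range(n):
        t = i / fs
        v = (math.sin(2 * math.pi * freq * t)
             + harmonic_weight * math.sin(2 * math.pi * 2 * freq * t))
        wave.append(v)
    lo, hi = min(wave), max(wave)
    return [(v - lo) / (hi - lo) for v in wave]

# 10 Hz flicker rendered at a hypothetical 60 Hz monitor refresh rate
lum = mixed_flicker(10.0, fs=60, duration_s=1.0)
```

The intuition is that energy at both the fundamental and the second harmonic can reinforce the SSVEP response, which detectors that include harmonic components (e.g. CCA references) can exploit.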
Brain–computer interface (BCI) research is actively optimizing the communication medium between the human brain and external devices. Objective. Rapid serial visual presentation (RSVP) is a robust and highly efficient BCI technique for recognizing target objects but suffers from limited target selections. Hybrid BCI systems that combine steady-state visual evoked potential (SSVEP) and RSVP can mitigate this limitation and allow users to operate on multiple targets. Approach. This study proposes a novel hybrid SSVEP-RSVP BCI to improve the classification of target/non-target objects in a multi-target scenario. In this paradigm, SSVEP stimulation helps identify the user's focus location, and RSVP stimuli that elicit event-related potentials differentiate target and non-target objects. Main results. The proposed model achieved an offline accuracy of 81.59% using 12 electroencephalography (EEG) channels and an online (real-time) accuracy of 78.10% when only four EEG channels were considered. Further, biomarkers of physiological state were analyzed to assess the cognitive states (mental fatigue and user attention) of the participants based on resting theta and alpha band powers. The results indicate an inverse relationship between BCI performance and resting EEG power, validating that subjects' performance is affected by physiological state during long-term BCI use. Significance. Our findings demonstrate that combining SSVEP and RSVP stimuli improves BCI performance and further enhances the possibility of performing multiple user command tasks, which are inevitable in real-world applications. Additionally, the cognitive state biomarkers discussed imply the need for an efficient and engaging experimental paradigm that reduces physiological state disparities and provides enhanced BCI performance.
Research focused on signals derived from the human organism is becoming increasingly popular. In this field, a special role is played by brain-computer interfaces based on brainwaves, which are gaining popularity due to the downsizing of EEG recording devices and ever-lower prices. Unfortunately, such systems are substantially limited in the number of commands they can generate, especially for sets that are not medical devices. This article proposes a hybrid brain-computer system based on the Steady-State Visual Evoked Potential (SSVEP), EOG, eye tracking, and a force feedback system. Such an expanded system eliminates many of the individual systems' shortcomings and provides much better results. The first part of the paper presents the methods applied in the hybrid brain-computer system. The system was tested in terms of the operator's ability to place the robot's tip at a designated position. A virtual model of an industrial robot was proposed and used in the testing, and the tests were repeated on a real-life industrial robot. The positioning accuracy of the system was verified with the feedback system both enabled and disabled. The results of tests conducted both on the model and on the real object clearly demonstrate that force feedback improves the positioning accuracy of the robot's tip when controlled by the operator. In addition, the results for the model and the real-life robot are very similar. In the next stage, research was carried out on the possibility of sorting items using the BCI system, on both the model and a real robot. The results show that sorting is possible using biosignals from the human body.
No abstract available
The loss of attention while driving is considered one of the reasons for the increasing road accident rate. Car manufacturers equip some models with visualization systems that project assistive information onto the windshield to address this problem. Technically simpler devices have also become widespread, in which the image is formed by an LCD or a cell-phone screen lying on the dashboard. However, this solution only partially removes the problem of loss of attention because of the need to interact with peripheral devices. In this paper, an approach providing bidirectional human-vehicle interaction is presented. To create a control flow from the driver to peripheral devices, brain-computer interface technology is used. For this purpose, we implemented an additional mode of head-up display functioning: the presentation of flickering visual stimuli. Classification of the EEG brain activity determines the stimulus on which the user's attention is focused, and the associated command is then performed. To examine the performance of this approach, a test platform was built and a series of tests was carried out. The results validate the efficiency of the proposed method and indicate directions for further improvement.
A crucial element lost in the context of a neurodegenerative disease is the possibility to freely explore and interact with the world around us. The work presented in this paper is focused on developing a brain-controlled Assistive Device (AD) to aid individuals in exploring the world around them with the help of a computer and their thoughts. By using the potential of a noninvasive Steady-State Visual Evoked Potential (SSVEP)-based Brain Computer Interface (BCI) system, users can control a flying robot (also known as a UAV or drone) in 3D physical space. From a video stream received from a camera mounted on the drone, users experience a degree of freedom while controlling the drone in 3D. The system proposed in this study uses a consumer-oriented headset, the Emotiv EPOC, to record the electroencephalogram (EEG) data. The system was tested on ten able-bodied subjects, with four distinct SSVEPs (5.3 Hz, 7 Hz, 9.4 Hz, and 13.5 Hz) detected and used as control signals for actuating the drone. A highly customizable visual interface was developed to elicit each SSVEP. The recorded data were filtered with an 8th-order Butterworth bandpass filter, and a fast Fourier transform (FFT) spectral analysis of the signal was applied in order to detect and classify each SSVEP. The proposed BCI system achieved an average Information Transfer Rate (ITR) of 10 bits/min and a Positive Predictive Value (PPV) of 92.5%. The final tests demonstrated that the proposed system can easily control a drone in 3D space.
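The detection pipeline described above, comparing spectral power at each candidate frequency, can be sketched as follows. This sketch omits the Butterworth pre-filtering stage and instead inspects a narrow FFT band around each candidate, which is an assumption rather than the paper's exact procedure:

```python
import numpy as np

def detect_ssvep_fft(signal, fs, candidate_freqs, tol_hz=0.2):
    """Classify an SSVEP trial by picking the candidate frequency with the
    largest FFT power in a narrow band around it."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = []
    for f in candidate_freqs:
        band = (freqs >= f - tol_hz) & (freqs <= f + tol_hz)
        powers.append(spectrum[band].max() if band.any() else 0.0)
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic 9.4 Hz response with additive noise, 2 s at a hypothetical 128 Hz
fs = 128
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
trial = np.sin(2 * np.pi * 9.4 * t) + 0.8 * rng.standard_normal(t.size)
detected = detect_ssvep_fft(trial, fs, [5.3, 7.0, 9.4, 13.5])
```

Note that a 2 s window gives 0.5 Hz spectral resolution, so the tolerance band must be wide enough to catch leakage from off-bin frequencies like 9.4 Hz.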
The use of quadcopters is increasing in more and more fields of daily life and is no longer limited to the military applications from which they originated; they are moving into entertainment, real estate, delivery, and so on. Unconventional man-machine interfaces are a promising topic to explore now and in the future. One of them is the Brain-Computer Interface (BCI), which has proven to be a very powerful tool for establishing communication without any motor movement of the limbs. BCI based on motor imagery (MI) requires very long training sessions to be used effectively. In contrast, BCI based on steady-state visual evoked potential (SSVEP) needs only a limited number of sessions, because electroencephalography (EEG) signal detection time (signal window length) and accuracy are the highest-priority performance parameters. This paper presents mathematical modeling and numerical simulation of a quadcopter and a BCI. An application is demonstrated with the help of a DJI Flight Simulator and an Emotiv Epoc+ headset.
Several kinds of brain–computer interface (BCI) systems have been proposed to compensate for the lack of medical technology for assisting patients who lose the ability to use motor functions to communicate with the outside world. However, most of the proposed systems are limited by their nonportability, impracticality, and inconvenience because of the adoption of wired or invasive electroencephalography acquisition devices. Another common limitation is the shortage of functions provided, owing to the difficulty of integrating multiple functions into one BCI system. In this paper, we propose a wireless, noninvasive, and multifunctional assistive system which integrates a steady-state visually evoked potential-based BCI and a robotic arm to help patients feed themselves. Patients are able to control the robotic arm via the BCI to serve themselves food. Three other functions are also integrated: 1) video entertainment; 2) video calling; and 3) active interaction. This is achieved by designing a functional menu and integrating multiple subsystems. A refinement decision-making mechanism is incorporated to ensure the accuracy and applicability of the system. Fifteen participants were recruited to validate the usability and performance of the system. The average accuracy and information transfer rate achieved are 90.91% and 24.94 bits per minute, respectively. The feedback from the participants demonstrates that this assistive system can significantly improve the quality of daily life.
The final merged grouping comprehensively illustrates the evolution of visual brain-computer interfaces (SSVEP-BCI) from basic theory to engineering applications. Research continues to break through not only in back-end algorithms (deep learning, adaptive filtering) and front-end stimulation paradigms (dual-frequency encoding, multimodal fusion) to improve ITR and user comfort, but also at the application level, where it has advanced dramatically to cover highly real-time, complex interaction domains ranging from medical rehabilitation and home assistance for the elderly to industrial collaboration, drones, and intelligent driving. Meanwhile, the open release of datasets and the establishment of fatigue evaluation frameworks mark the field's transition from laboratory research toward standardization and industrialization.