Notice and Invitation
Oral Defense of Doctoral Dissertation
The Volgenau School of Engineering, George Mason University 
 
Marjan Saadati 

Bachelor of Science, Electrical Engineering, Chamran University of Ahvaz, 2005
Master of Science, Electrical Engineering, Chamran University of Ahvaz, 2009
 
Topic: Multimodal Deep Learning Algorithms for Spatio-Temporal-Spectral Feature Extraction and Classification in fNIRS and fNIRS-EEG 

 

 

Time: Jul 16, 2021 09:00 PM Eastern Time (US and Canada) 

Zoom Meeting Link: https://gmu.zoom.us/j/96993913334 

Meeting ID: 969 9391 3334 

 

All are invited to attend.  



Committee  


Dr. Jill Nelson, Chair  

Dr. Hasan Ayaz, Co-advisor  

Dr. Vasiliki N. Ikonomidou

Dr. James Jones  

 

Abstract 

 

Accurate brain activity classification is essential for decoding mental states and motor cortical activities, both of which are critical to advancing Brain-Computer Interfaces (BCIs) and adaptive Human-Machine Interfaces (HMIs). Brain activity classification is used to increase safety and operator performance in complex HMIs and to improve the accuracy of decoding the user's intention in BCIs. HMIs and BCIs have extensive applications in fields such as aerospace, robotic surgery, and neurorehabilitation tools for patients with brain injuries.

Among the noninvasive neuroimaging techniques used for decoding brain activities, functional Near Infrared Spectroscopy (fNIRS) and Electroencephalography (EEG) are promising sensing modalities. By combining the two modalities (denoted fNIRS-EEG), we can leverage their complementary temporal and spatial resolutions, as well as their sensitivities to different cascades of events, to capture as much information as possible to aid brain activity classification.

While a variety of both classical and modern classification techniques have been explored for fNIRS and fNIRS-EEG data, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have received only minimal attention. A significant advantage of CNNs over other classification methods is that they do not require prior feature selection or computationally demanding preprocessing or denoising. However, since CNNs accept images as input, neural recordings must be converted into 2D or 3D images in a manner that captures as much temporal, spectral, and spatial information from the dataset as possible, so that performance remains consistent regardless of within- and across-subject differences.
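As an illustration only, a minimal sketch of one possible mapping from multichannel fNIRS recordings to CNN-ready image tensors is given below; the channel grid, window length, and array names are hypothetical placeholders, not the representations used in the dissertation.

import numpy as np

def to_image_tensor(recording, grid_rows=4, grid_cols=4, window=30):
    """Reshape a (channels x samples) fNIRS recording into
    (windows x rows x cols x time) image-like tensors for a CNN.
    Assumes the channels can be arranged on a rows x cols scalp grid."""
    n_channels, n_samples = recording.shape
    assert n_channels == grid_rows * grid_cols
    n_windows = n_samples // window
    # Trim to an integer number of windows, then split along time.
    trimmed = recording[:, : n_windows * window]
    windows = trimmed.reshape(n_channels, n_windows, window)
    # Rearrange to (windows, rows, cols, time) so each sample is a
    # spatial "image" with time as the depth dimension.
    return windows.transpose(1, 0, 2).reshape(
        n_windows, grid_rows, grid_cols, window)

# Example: a 16-channel oxy-Hb recording with 900 time points.
oxy_hb = np.random.randn(16, 900)
images = to_image_tensor(oxy_hb)
print(images.shape)  # (30, 4, 4, 30)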

This thesis focuses on the robust investigation of deep learning architectures and optimization methods to perform brain activity classification while considering the latency of fNIRs relative to EEG and eliminating complex preprocessing and detrending methods. We propose and compare a variety of architectures for classification of hybrid fNIRS-EEG data and standalone fNIRS data. These architectures are applied to spatial, spectral, and temporal representations of neuronal signals. Architectures are benchmarked using mental workload memory and motor imagery tasks from multiple open-source meta-datasets. The proposed methods and representations demonstrate significantly higher performance than common methods such as Support Vector Machines (SVMs) in both inter- and intra-subject investigations. Complementary statistical measures and comparison between oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) as the selected features for the classification confirm the robustness of the methods. Accordingly, this emerging technology shows great promise for use in other cognitive and neuroergonomic task classification using neuroimaging.
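For context, the kind of CNN-versus-SVM comparison described above could be sketched as follows; the layer sizes, class count, and randomly generated data are illustrative assumptions and do not reflect the architectures or datasets evaluated in the dissertation.

import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Illustrative data: 200 trials of 4x4x30 image tensors, two classes.
X = np.random.randn(200, 4, 4, 30).astype("float32")
y = np.random.randint(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Small CNN operating directly on the spatio-temporal tensors.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (2, 2), activation="relu", input_shape=(4, 4, 30)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X_tr, y_tr, epochs=5, batch_size=16, verbose=0)
_, cnn_acc = cnn.evaluate(X_te, y_te, verbose=0)

# Linear SVM baseline on flattened features.
svm = SVC(kernel="linear").fit(X_tr.reshape(len(X_tr), -1), y_tr)
svm_acc = svm.score(X_te.reshape(len(X_te), -1), y_te)
print(f"CNN accuracy: {cnn_acc:.2f}, SVM accuracy: {svm_acc:.2f}")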