Generalizing Brain Decoding Across Subjects with Deep Learning
Citation
Richard Csaky, Mats W.J. van Es, Oiwi Parker Jones, and Mark Woolrich. Generalizing Brain Decoding Across Subjects with Deep Learning. arXiv:2205.14102v1
Abstract
Decoding experimental variables from brain imaging data is gaining popularity, with applications in brain-computer interfaces and the study of neural representations. Decoding is typically subject-specific and does not generalise well over subjects. Here, we investigate ways to achieve cross-subject decoding. We used magnetoencephalography (MEG) data where 15 subjects viewed 118 different images, with 30 examples per image. Training on the entire 1 s window following the presentation of each image, we experimented with an adaptation of the WaveNet architecture for classification. We also investigated the use of subject embedding to aid learning of subject variability in the group model. We show that deep learning and subject embedding are crucial to closing the performance gap between subject- and group-level models. Importantly, group models outperform subject models when tested on an unseen subject with little available data. The potential of such group modelling is even higher with bigger datasets. Furthermore, we demonstrate the use of permutation feature importance to gain insight into the spatio-temporal and spectral information encoded in the models, enabling better physiological interpretation. All experimental code is available at https://github.com/ricsinaruto/MEG-group-decode.
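For illustration, below is a minimal sketch of how a learned subject embedding can be combined with a dilated-convolution (WaveNet-style) classifier in a group model. This is not the authors' implementation (see the linked repository for that); the channel count, embedding size, sampling rate, and layer sizes are placeholder assumptions.

```python
# Minimal sketch, not the paper's architecture. Assumptions: MEG epochs of shape
# (batch, channels, time), integer subject IDs, a per-subject embedding broadcast
# over time and concatenated to the input channels, and a small stack of dilated
# 1-D convolutions standing in for the WaveNet-style encoder named in the abstract.
import torch
import torch.nn as nn


class GroupDecoder(nn.Module):
    def __init__(self, n_channels=306, n_subjects=15, n_classes=118,
                 emb_dim=10, hidden=64):
        super().__init__()
        self.subject_emb = nn.Embedding(n_subjects, emb_dim)
        layers = []
        in_ch = n_channels + emb_dim
        for dilation in (1, 2, 4, 8):  # growing receptive field over the 1 s window
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=2,
                                 dilation=dilation, padding=dilation),
                       nn.ReLU()]
            in_ch = hidden
        self.encoder = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, subject_id):
        # x: (batch, channels, time); subject_id: (batch,)
        emb = self.subject_emb(subject_id)                  # (batch, emb_dim)
        emb = emb.unsqueeze(-1).expand(-1, -1, x.size(-1))  # broadcast over time
        h = self.encoder(torch.cat([x, emb], dim=1))
        return self.head(h.mean(dim=-1))                    # pool over time, classify


model = GroupDecoder()
epochs = torch.randn(8, 306, 250)        # e.g. 250 samples ~ 1 s at an assumed 250 Hz
subjects = torch.randint(0, 15, (8,))
logits = model(epochs, subjects)         # (8, 118) class scores
```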
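Likewise, a generic sketch of channel-wise permutation feature importance, the interpretation technique named in the abstract: shuffle one input channel across held-out trials, re-evaluate the trained model, and record the drop in decoding accuracy. It reuses the model interface from the sketch above; the spatio-temporal and spectral variants reported in the paper may differ in detail.

```python
# Generic permutation feature importance recipe (not necessarily the paper's
# exact procedure): a larger accuracy drop after permuting a channel suggests
# that channel carries more decodable information.
import torch


def accuracy(model, x, subjects, y):
    with torch.no_grad():
        return (model(x, subjects).argmax(dim=1) == y).float().mean().item()


def channel_importance(model, x, subjects, y):
    base = accuracy(model, x, subjects, y)
    scores = []
    for ch in range(x.size(1)):
        x_perm = x.clone()
        x_perm[:, ch] = x[torch.randperm(x.size(0)), ch]  # break channel-trial pairing
        scores.append(base - accuracy(model, x_perm, subjects, y))
    return scores
```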
Description
Preprint
Published online at:
Collections
- Neuroscience