AI Deep Dive: Training a Transformer Model on EEGs

On Friday, September 30 at 1:00pm, the Data Science Institute will host Prof. Sasha Key for a discussion of applying transformer deep learning models to the analysis of multichannel EEG recorded under multiple stimulus/recording conditions (e.g., faces vs. objects, speech vs. nonspeech, attend vs. ignore). Transformers are powerful sequence learners and can be pretrained on unlabeled data through self-supervised learning; the resulting models can then serve as a foundation for a wide variety of downstream tasks. One application of a pretrained EEG model could be to classify individual subjects into clinical or control groups, or to predict treatment response (responder vs. nonresponder). Come join our discussion of transformers, their potential applications to EEG, and the uses of EEG-trained models!
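
To give a flavor of what this pretrain-then-fine-tune workflow can look like in practice, here is a minimal sketch in PyTorch. It is not Prof. Key's method; it simply illustrates the general idea of self-supervised pretraining (masking random temporal patches of an EEG segment and reconstructing them) followed by reusing the same encoder for subject-level classification. The channel count, patch length, model sizes, and function names are illustrative assumptions, not details from the talk.

```python
# Minimal sketch (illustrative only): a transformer encoder over temporal
# patches of multichannel EEG, pretrained by masked-patch reconstruction,
# then reused with a classification head (e.g., clinical vs. control).
import torch
import torch.nn as nn

N_CHANNELS = 64   # hypothetical number of EEG channels
PATCH_LEN = 32    # samples per temporal patch (one token)
N_PATCHES = 16    # tokens per EEG segment (16 * 32 = 512 samples)
D_MODEL = 128

class EEGTransformer(nn.Module):
    """Transformer encoder whose tokens are temporal patches across all channels."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(N_CHANNELS * PATCH_LEN, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, D_MODEL))
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=4, dim_feedforward=256, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Two heads: reconstruct masked patches (pretraining), classify subjects (fine-tuning).
        self.reconstruct = nn.Linear(D_MODEL, N_CHANNELS * PATCH_LEN)
        self.classify = nn.Linear(D_MODEL, 2)

    def forward(self, x, mask=None):
        # x: (batch, N_PATCHES, N_CHANNELS * PATCH_LEN) flattened EEG patches
        tokens = self.embed(x)
        if mask is not None:
            # Zero out masked patch embeddings so the model must infer them from context.
            tokens = tokens * (~mask).unsqueeze(-1)
        tokens = tokens + self.pos  # positional information for every token
        return self.encoder(tokens)

def pretrain_step(model, x, optimizer, mask_ratio=0.3):
    """One self-supervised step: mask random patches, reconstruct only those patches."""
    mask = torch.rand(x.shape[0], x.shape[1]) < mask_ratio  # (batch, N_PATCHES)
    hidden = model(x, mask=mask)
    pred = model.reconstruct(hidden)
    loss = ((pred - x) ** 2)[mask].mean()  # MSE on the masked patches only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def classify_subjects(model, x):
    """Downstream use of the pretrained encoder: pool tokens, predict group membership."""
    hidden = model(x)                      # no masking at fine-tuning / inference time
    return model.classify(hidden.mean(dim=1))  # (batch, 2) logits

if __name__ == "__main__":
    model = EEGTransformer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    fake_eeg = torch.randn(8, N_PATCHES, N_CHANNELS * PATCH_LEN)  # stand-in data
    print("pretraining loss:", pretrain_step(model, fake_eeg, opt))
    print("class logits shape:", classify_subjects(model, fake_eeg).shape)
```

Because the pretraining step uses no labels, it can draw on large archives of unlabeled EEG; the classification head is then trained on the (typically much smaller) labeled cohort, which is the appeal of the foundation-model approach discussed above.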