Variations of the LSTM Architecture for the Classification of Cognitive Stimuli-Based EEG Signals for BCI Applications


Prashant Srinivasan Sarkar, E. Grace Mary Kanaga, M. Bhuvaneswari

Abstract

Deep learning is the branch of artificial intelligence most closely modeled on the functioning of the human brain. Deep neural networks perform millions of simple computations that, when combined, yield remarkably complex problem-solving capability. With recent breakthroughs in computational technology, neural networks are now larger than ever, capable of performing complex tasks that were hitherto believed to be science fiction. As modern technology grows closer to mimicking the human brain, researchers around the globe ponder the question: can this technology understand the brain? This experimental analysis of LSTM deep neural networks proposes multiple variations of the traditional LSTM architecture as an optimized method for the classification of electroencephalogram (EEG) brain signals. It explores five variations of the LSTM architecture: Vanilla LSTM, Stacked LSTM, Bidirectional LSTM, Stacked Bidirectional LSTM, and LSTM with an Attention Mechanism. The analysis proposes a hybrid LSTM with an attention mechanism as a potential solution for building efficient, accurate, and lightweight brain-computer interfaces. The proposed architecture achieved an F1 score of 94% for the classification of cognitive stimuli-based EEG signal data. These architectures show promise in the realm of brain-computer interfacing as potential solutions for the future of human-computer interaction.
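The attention mechanism paired with the LSTM in the abstract can be illustrated with a minimal sketch: attention computes a score per time step of the LSTM's hidden-state sequence, normalizes the scores with a softmax, and pools the sequence into a single weighted summary vector that a classifier head can consume. The dimensions and the scoring vector below are hypothetical, chosen only for illustration (the paper's actual model details are not given in this abstract).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Pool a sequence of LSTM hidden states into one context vector.

    hidden_states: (T, d) array of LSTM outputs over T time steps.
    w: (d,) learned scoring vector (hypothetical here, random below).
    """
    scores = softmax(hidden_states @ w)   # (T,) attention weights, sum to 1
    context = scores @ hidden_states      # (d,) weighted summary of the sequence
    return context, scores

# Toy example: 128 EEG time steps, 64 hidden units (illustrative sizes only).
rng = np.random.default_rng(0)
H = rng.standard_normal((128, 64))
w = rng.standard_normal(64)
context, scores = attention_pool(H, w)
```

The `context` vector would then feed a dense softmax layer for stimulus classification; in a trained model, `w` (or a small scoring network) is learned jointly with the LSTM rather than fixed.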
