For patients with advanced amyotrophic lateral sclerosis (ALS), communication becomes increasingly difficult as the disease progresses. In many cases, ALS (also known as Lou Gehrig’s disease) leads to locked-in syndrome, in which a patient is completely paralyzed but remains cognitively intact. Eye-tracking devices and, more recently, electroencephalogram (EEG)-based brain-computer interfaces (BCIs) enable ALS patients to communicate by spelling phrases letter by letter, but it can take several minutes to convey even a simple message.
Magnetoencephalography (MEG) is a noninvasive technique that detects the magnetic activity produced by electrical signals occurring naturally in the brain. Researchers at the University of Texas at Austin have developed a technology that uses wavelets and deep neural networks to decode MEG signals and detect entire phrases as the patient imagines speaking them. MATLAB® enabled the researchers to combine wavelet-based signal processing approaches with a variety of machine learning and deep learning techniques.
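The article does not spell out the pipeline, but a common way to combine wavelets with deep learning is to convert each sensor's time series into a wavelet scalogram and feed the resulting time-frequency image to a convolutional network. The sketch below illustrates that general pattern in MATLAB on a simulated signal; the sampling rate, trial length, image size, and the scalogram-plus-CNN workflow itself are illustrative assumptions, not the UT Austin team's actual implementation. It requires the Wavelet Toolbox (for `cwt`) and the Image Processing Toolbox (for `imresize`).

```matlab
% Minimal sketch (illustrative, not the lab's actual pipeline):
% turn one simulated MEG channel into a wavelet scalogram image
% that could serve as input to a deep network.
fs = 1000;                      % assumed sampling rate (Hz)
t  = 0:1/fs:1-1/fs;             % one assumed 1-second trial
x  = randn(size(t));            % random stand-in for a real MEG channel

% Continuous wavelet transform (defaults to the analytic Morse wavelet);
% wt holds the coefficients, f the corresponding frequencies in Hz.
[wt, f] = cwt(x, fs);

% Magnitude scalogram, rescaled to [0,1] and resized to a fixed image
% size (224x224 is typical input for many pretrained CNNs).
scalogram = rescale(abs(wt));
img = imresize(scalogram, [224 224]);

% Calling cwt with no output arguments plots the scalogram directly.
figure; cwt(x, fs);
```

One appeal of this design is that the scalogram turns a 1-D sensor recording into an image, so standard image-classification networks, including pretrained ones, can be applied to the decoding problem with little modification.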
“We need to be able to try an approach, visualize the results, and then retrace our steps or try something new if it’s not working well,” says Debadatta Dash, a doctoral student in the UT Austin Speech Disorders and Technology Lab. “In another programming language, those iterations can be time-consuming, but with MATLAB we can use extensive signal processing libraries along with toolboxes to rapidly evaluate new ideas and immediately see how well they work.”