How speech can be reconstructed from brainwaves
Researchers at the Karlsruhe Institute of Technology (KIT) have achieved a breakthrough in brain research: from the brain activity of test subjects, they were able to derive the speech sounds the person was saying at that moment, and even reconstructed whole words and sentences. The method, developed by the Karlsruhe team together with researchers from the American Wadsworth Center, is called "brain-to-text". The discovery could be groundbreaking especially for people who suffer from locked-in syndrome. The scientists published their results in the journal "Frontiers in Neuroscience".
Language becomes visible through brain waves
"It has long been speculated whether direct communication between man and machine via brain waves is possible," said the computer science professor Tanja Schultz, who led the study at the Cognitive Systems Lab of KIT, in a statement from the Institute. "We were able to show that brainwaves can be used to recognize individual speech sounds and continuously spoken complete sentences."
As part of their investigation, the researchers placed a grid of electrodes directly on the cerebral cortex of seven epilepsy patients in the United States. The patients' brains were already exposed for their epilepsy treatment anyway. With electrodes attached to the outside of the head to measure brain activity, the procedure is not yet feasible.
"For the first time, we can observe the brain as we speak," Schultz told the news agency "dpa". For the first time, researchers were able to track brain activity right through to activating the muscles of the articulating organs with neurons in the cerebral cortex. The activities were made recognizable by colors. "The higher the activity, the hotter the color," Schultz continues.
Brain researchers create a database of 50 different sounds
The subjects had to read out certain texts, such as a speech by former US President John F. Kennedy or simple nursery rhymes. Since the researchers knew when which sound was spoken, they were able to build databases of about 50 sound prototypes from the measured brain waves. Using algorithms, they were later able to recognize what was said from the brain waves alone. The sounds were considered in the context of words and whole sentences. "This gives us nice results which, in terms of quality, are still far removed from acoustic speech recognition, but are already much better than chance," says Schultz.
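The basic idea can be illustrated with a small, purely hypothetical sketch; it is not the KIT system itself. Assuming short windows of brain-signal features have already been aligned to the roughly 50 speech sounds, one can average them into a prototype per sound and then label new windows by their nearest prototype. The feature dimension and the simulated data below are assumptions made only for demonstration.

```python
# Illustrative sketch only -- not the authors' actual pipeline.
# It shows the "sound prototype" idea: average aligned feature vectors per
# speech sound, then classify new windows by the closest prototype.
import numpy as np

rng = np.random.default_rng(0)

N_PHONES = 50      # roughly the number of sound prototypes described
N_FEATURES = 32    # hypothetical feature dimension per brain-signal window

# Simulated training data: a cluster of feature vectors for each sound.
true_means = rng.normal(size=(N_PHONES, N_FEATURES))
train_X = np.vstack([m + 0.3 * rng.normal(size=(20, N_FEATURES)) for m in true_means])
train_y = np.repeat(np.arange(N_PHONES), 20)

# Build one prototype (mean feature vector) per sound.
prototypes = np.array([train_X[train_y == p].mean(axis=0) for p in range(N_PHONES)])

def classify(window: np.ndarray) -> int:
    """Return the index of the sound whose prototype is closest to the window."""
    distances = np.linalg.norm(prototypes - window, axis=1)
    return int(np.argmin(distances))

# Check how often the sound is recovered on simulated held-out windows.
test_X = true_means + 0.3 * rng.normal(size=(N_PHONES, N_FEATURES))
accuracy = np.mean([classify(x) == p for p, x in enumerate(test_X)])
print(f"nearest-prototype accuracy on simulated data: {accuracy:.2f}")
```

In the actual study, such sound-level hypotheses were additionally constrained by word and sentence context, which is what the quote about considering sounds "in the context of words and whole sentences" refers to.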
For "brain-to-text" methods of signal processing and automatic speech recognition were used. "In addition to the recognition of speech from brain signals, these allow a detailed analysis of the brain regions involved in the language process and their interactions," explain Christian Herff and Dominic Heger, who developed the system as part of their doctoral studies, in the KIT communication.
So far, however, the new method has only been tested on seven patients, from each of whom only about five minutes of data were available during the examinations. In the future, people suffering from so-called locked-in syndrome in particular could benefit from "brain-to-text". Although those affected are conscious, they cannot make themselves understood because of their paralysis. (Ag)