Artificial intelligence translates brain signals directly into words

Neuroscientists at Columbia University in New York have developed a system that translates brain activity into intelligible words, converting brain signals into recognizable speech for the first time.


Mind-controlled hearing aid to be developed by scientists using AI

This technology, based on voice synthesizers and artificial intelligence, reconstructs the words a person hears with unprecedented clarity.
According to Nima Mesgarani (PhD and principal investigator) and his colleagues, this breakthrough marks a decisive step toward brain-computer interfaces that would let people with limited or no ability to speak express themselves, especially those living with the after-effects of stroke or with amyotrophic lateral sclerosis (ALS), a disease of the nervous system that attacks nerve cells (neurons) in the brain and spinal cord.

The researchers also say this development could lead to new forms of communication between computers and the human brain. "Our voice helps us communicate with our friends, our family and the world around us, so losing the use of it through injury or illness is terrible," Mesgarani said in a statement. He adds: "Our study represents a way to restore that power. We show that, with the right technology, another person's thoughts can be deciphered and understood by any listener."



Brain patterns of words

In recent decades, neuroscientists have shown that distinctive patterns of activity appear in the brain when a person speaks (or imagines speaking). Distinct signals also appear when a person listens to someone else speak.
These two observations led researchers to try to record and decode these patterns, a kind of neural code, in order to read the thoughts circulating in the brain and translate them into words.
Earlier efforts by this team, and by other research groups, focused on simple computer models that analyze spectrograms, which are visual representations of sound frequencies. But that approach failed to reproduce intelligible, speech-like sounds.
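
To make that earlier approach concrete, here is a minimal sketch of how a spectrogram is computed from an audio signal with SciPy. The sampling rate, window sizes and the synthetic tone standing in for recorded speech are illustrative assumptions, not details from the study.

# Minimal sketch: computing a spectrogram, the visual representation
# of sound frequencies that earlier decoding models tried to predict.
# The 16 kHz rate and the pure tone are stand-ins for real speech audio.
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                          # sampling rate in Hz (assumed)
t = np.arange(fs) / fs               # one second of audio
audio = np.sin(2 * np.pi * 440 * t)  # stand-in for a recorded sentence

# 25 ms windows with 10 ms hops, a common choice for speech analysis
freqs, times, power = spectrogram(audio, fs=fs,
                                  nperseg=int(0.025 * fs),
                                  noverlap=int(0.015 * fs))
print(power.shape)                   # (frequency bins, time frames)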
Following this failure, Nima Mesgarani's team abandoned the spectrogram and turned to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.


Brain-controlled hearing aid methodology

Improved technology

The vocoder is a speech analyzer and synthesizer developed in the 1930s as a voice coder for telecommunications, now revived for this technology.
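
The study does not publish its vocoder code, but the classic channel-vocoder idea can be sketched in a few lines: split the signal into frequency bands, extract each band's slow amplitude envelope, and use those envelopes to modulate a carrier. The band count, band edges and noise carrier below are illustrative choices, not the study's parameters.

# Minimal sketch of a classic channel vocoder: the speech signal is split
# into frequency bands, the slow amplitude envelope of each band is
# extracted, and those envelopes modulate a carrier to resynthesize
# speech-like audio.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_vocoder(speech, fs, n_bands=16, f_lo=80.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    carrier = np.random.randn(len(speech))         # noise carrier (one option)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))           # slow amplitude contour
        excited = sosfiltfilt(sos, carrier)        # carrier limited to this band
        out += envelope * excited
    return out / np.max(np.abs(out))               # normalize to [-1, 1]

fs = 16_000
speech = np.random.randn(fs)        # stand-in for one second of speech
resynthesized = channel_vocoder(speech, fs)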

For this research, the neuroscientists taught a vocoder to interpret a person's brain activity. They did so with the support of Ashesh Dinesh Mehta, a neurosurgeon who regularly performs brain surgery on epilepsy patients.

"We asked epileptic patients to listen to the prayers uttered by different people, while measuring their patterns of brain activity," Mesgarani explains. These neural patterns were the ones that served to train the vocoder.
The researchers then asked the same patients to listen to speakers reciting the digits 0 through 9, recording their brain signals as they listened. These signals were fed to the vocoder, which rendered them as synthesized sound.
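
As a rough illustration of what training on neural patterns involves, the sketch below fits a decoder that maps recorded neural activity to the acoustic parameters a vocoder consumes. The study used deep neural networks; ridge regression is a deliberately simple stand-in here, and all array shapes and the random stand-in data are assumptions.

# Minimal sketch of the training/decoding idea: learn a mapping from
# recorded neural activity to the acoustic parameters a vocoder needs.
import numpy as np
from sklearn.linear_model import Ridge

n_frames, n_electrodes, n_speech_params = 5000, 128, 32

# Stand-ins for aligned training data: neural activity recorded while
# patients listened to sentences, and the speech parameters of that audio.
neural_train = np.random.randn(n_frames, n_electrodes)
speech_train = np.random.randn(n_frames, n_speech_params)

decoder = Ridge(alpha=1.0).fit(neural_train, speech_train)

# Decoding: brain activity recorded while patients heard spoken digits
# is mapped to speech parameters, which a vocoder then turns into audio.
neural_test = np.random.randn(100, n_electrodes)
predicted_params = decoder.predict(neural_test)  # feed these to the vocoder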

The sound the vocoder produced in response to those brain signals was then analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain. The result was a robotic-sounding voice reciting a sequence of numbers.
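
The article does not specify the cleanup architecture. As one plausible reading, the sketch below trains a small feed-forward network to map noisy decoded speech parameters to clean targets with a mean-squared-error loss; the layer sizes, loss and stand-in data are all illustrative assumptions, written in PyTorch.

# Minimal sketch of the cleanup stage: a small neural network trained to
# map noisy decoded speech parameters to clean ones. Not the study's
# actual architecture; purely illustrative.
import torch
import torch.nn as nn

n_speech_params = 32

denoiser = nn.Sequential(
    nn.Linear(n_speech_params, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, n_speech_params),
)

optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batch: noisy decoder output paired with clean targets
noisy = torch.randn(64, n_speech_params)
clean = torch.randn(64, n_speech_params)

for _ in range(100):                 # training loop (illustrative)
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()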




75% accuracy

To verify the accuracy, Mesgarani and his team asked people to listen to the recordings and report what they heard. They found that listeners could understand and repeat the sounds 75% of the time, well beyond any previous attempt. "The sensitive vocoder and the powerful neural networks represented the sounds that patients had originally heard with surprising precision," says Mesgarani.
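
The 75% figure is simple to reproduce in spirit: intelligibility is the fraction of digits that listeners report correctly. The responses below are invented purely to show the arithmetic.

# Back-of-the-envelope version of the intelligibility test: listeners
# report the digit they heard, and accuracy is the fraction of correct
# reports. These responses are made up for illustration only.
spoken   = [7, 2, 9, 0, 4, 1, 8, 3, 5, 6, 2, 7]
reported = [7, 2, 9, 5, 4, 1, 8, 3, 0, 6, 5, 7]

correct = sum(s == r for s, r in zip(spoken, reported))
accuracy = correct / len(spoken)
print(f"{accuracy:.0%} of digits identified correctly")  # 75% here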

The ultimate goal is to integrate this technology into an implant, similar to those already worn by some epilepsy patients, that would translate the wearer's thoughts directly into words. A person could then think "I need a glass of water," and the system would transform the brain signals generated by that thought into synthesized speech.
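
Conceptually, such an implant would chain together the stages described above: record neural signals, decode them into speech parameters, clean them up, and synthesize audio. The sketch below only shows that structure; every function name and the identity stand-ins are hypothetical, not the study's actual system.

# Hypothetical sketch of the envisioned implant pipeline, composing the
# stages described in this article. All names are assumptions made for
# illustration.
import numpy as np

def thought_to_speech(neural_signals, decode, denoise, synthesize):
    """Brain activity in, synthesized audio out."""
    params = decode(neural_signals)   # neural signals -> speech parameters
    params = denoise(params)          # neural-network cleanup stage
    return synthesize(params)         # vocoder: parameters -> waveform

# Identity stand-ins make the sketch runnable end to end
audio = thought_to_speech(np.random.randn(100, 128),
                          decode=lambda x: x[:, :32],
                          denoise=lambda p: p,
                          synthesize=lambda p: p.ravel())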

This is not the first time neuroscientists have managed to break the codes the brain uses to communicate and to interpret its environment. Australian neuroscientists recently deciphered the neural code that allows the brain to interpret the information it receives from the senses.