A US team has developed a non-invasive speech decoder: a brain-computer interface that aims to reconstruct complete sentences from functional magnetic resonance imaging (fMRI). That is, a machine capable of reading thoughts.
Although other decoders already existed, some are invasive, requiring neurosurgery, while the non-invasive ones could only recognize single words or short phrases.
In this case, as reported in the journal Nature Neuroscience, the team recorded brain responses with fMRI as three participants listened to 16 hours of stories. The authors used these data to train the model, which was then able to decode other fMRI data from the same person listening to new stories.
Previous speech decoders have been applied to neural activity recorded after invasive neurosurgery, thereby limiting their use.
The University of Texas team led by Alexander Huth has designed a decoder that reconstructs continuous language from brain patterns derived from fMRI data.
The authors recorded fMRI data from three participants as they listened to 16 hours of narrative stories, and used these recordings to train the model to map brain activity onto semantic features that capture the meaning of phrases.
This decoding model was tested on participants’ brain responses while they listened to new stories that were not part of the original training data set. From this brain activity, the decoder generated sequences of words that captured the meaning of the new stories, including some exact words and phrases from them. The authors found that the decoder could infer continuous language from activity in most brain areas and networks that process language.
The authors also found that the decoder, although trained on perceived speech, was able to infer from the fMRI data the content of a story a participant imagined, or of a silent film they had watched.
When a participant actively listened to one story while ignoring another story played simultaneously, the decoder could identify the meaning of the story being actively attended to.
This research, David Rodríguez-Arias Vailhen, deputy director of FiloLab and professor of bioethics at the University of Granada, told the Science Media Center, “demonstrates the ability to decode the minds of people who can communicate, without their saying a word, to determine whether they are telling the story of Little Red Riding Hood or of the Three Little Pigs.”
As is often the case with any technological advancement, it also bears a caveat of responsibility.
David Rodríguez-Arias Vailhen
University of Granada
Huth and colleagues also performed a privacy analysis of the decoder and found that, when trained on one participant’s fMRI data, it performed poorly at predicting the semantic content of another participant’s data.
In this sense, Rodríguez-Arias Vailhen explains, “As is usually the case with all technological advances, this one also carries a caveat of responsibility. If a machine can end up reading your mind after being trained, it is possible that it may involuntarily, and without your consent (for example, while you sleep), translate fragments of your thoughts. Until now, our mind has been the guardian of our privacy. We can, if we wish, jealously guard even our most shameful thoughts. This discovery could be a first step toward compromising that freedom in the future.”
The authors conclude that participants’ cooperation is critical to the training and application of these non-invasive decoders. They noted that policies to protect mental privacy may be needed depending on the future development of these technologies.