A team of scientists has created an algorithm that enables an artificial intelligence-based “brain decoder” trained on one person to interpret the thoughts of another person with minimal training. (Image credit: Jerry Tang/University of Texas at Austin)
Researchers have made new improvements to a 'brain decoder' that uses artificial intelligence (AI) to convert thoughts into text.
Their new converter algorithm can quickly adapt an existing decoder to another person's brain, the team reported in a recent study. The scientists say the results could one day help people suffering from aphasia, a disorder that makes it difficult to communicate.
The brain decoder uses machine learning to translate a person’s thoughts into text based on their brain’s responses to stories they have heard. However, previous versions of the decoder required participants to spend many hours in an MRI machine listening to stories, and they only worked for the people they were trained on.
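At a high level, decoders of this kind are often built by fitting an encoding model that predicts brain activity from features of the language a person hears, then searching for word sequences whose predicted activity best matches a newly measured recording. The sketch below illustrates that general pattern on toy data; the hashed bag-of-words features, the cosine scoring rule, and all variable names are simplifying assumptions for illustration, not the study’s implementation.

```python
# A toy sketch of response-matching decoding (not the study's code): fit a linear
# "encoding model" from text features to simulated brain responses, then pick the
# candidate sentence whose predicted response best matches a new recording.
# The feature choice (hashed bag-of-words) and all data here are illustrative.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
vectorizer = HashingVectorizer(n_features=256, alternate_sign=False)

# Training: sentences the participant heard, paired with (simulated) fMRI responses.
heard_sentences = ["she walked to the store", "the dog barked all night",
                   "he cooked dinner for his family", "rain fell on the quiet town"]
X_train = vectorizer.transform(heard_sentences).toarray()
true_weights = rng.standard_normal((256, 50))              # hypothetical voxel weights
Y_train = X_train @ true_weights + 0.1 * rng.standard_normal((4, 50))

encoder = Ridge(alpha=1.0).fit(X_train, Y_train)           # text features -> brain response

# Decoding: score candidate sentences against a newly measured response.
candidates = ["the dog barked loudly", "she bought milk at the store"]
measured = (vectorizer.transform(["she walked to buy milk"]).toarray() @ true_weights)[0]
predicted = encoder.predict(vectorizer.transform(candidates).toarray())
scores = predicted @ measured / (np.linalg.norm(predicted, axis=1) * np.linalg.norm(measured))
print(candidates[int(np.argmax(scores))])  # candidate whose predicted response fits best
```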
“People with aphasia often have trouble understanding language as well as producing it,” said study co-author Alexander Huth, a computational neuroscientist at the University of Texas at Austin. “So if that’s the case, we may not be able to build models of their brains at all by watching how their brains respond to the stories they hear.”
In a new study published Feb. 6 in the journal Current Biology, Huth and co-author Jerry Tang, a graduate student at UT Austin, explored how to overcome that limitation. “In this study, we asked ourselves, ‘Can we do something differently?’” Huth said. “Can we essentially transfer a decoder that’s built for one brain to another person’s brain?”
The researchers first trained the decoder on a group of reference participants the lengthy way: by collecting functional MRI data while those participants listened to 10 hours of radio broadcasts.
They then trained two converter algorithms on data from the reference participants and from another group of “target” participants: one algorithm used data collected during 70 minutes of listening to radio broadcasts, and the other used data collected during 70 minutes of watching silent Pixar short films unrelated to the broadcasts.
Using a technique called functional alignment, the team mapped how the brains of the reference and target participants responded to the same audio or film stories. They then used that mapping to make the decoder work with the target participants’ brains without collecting hours of additional training data.
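One common way to implement this kind of functional alignment is to learn a linear transform from one person’s fMRA responses to another’s while both experience the same stimulus, then pass the transformed data to the existing decoder. The snippet below is a minimal sketch of that idea, assuming a simple ridge-regression mapping and synthetic data; it is not the authors’ code, and the data shapes and variable names are illustrative.

```python
# A minimal sketch (not the authors' code) of functional alignment as a linear
# mapping: learn a transform from a target participant's fMRI responses to a
# reference participant's response space while both experience the same stimulus,
# then reuse the decoder trained on the reference participant.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500      # fMRI volumes recorded during the shared stimulus
n_vox_target = 800      # voxels in the target participant's scan
n_vox_reference = 1000  # voxels in the reference participant's scan

# Simulated responses of both participants to the *same* audio or film stimulus.
target_responses = rng.standard_normal((n_timepoints, n_vox_target))
mixing = rng.standard_normal((n_vox_target, n_vox_reference)) * 0.05
reference_responses = target_responses @ mixing + 0.1 * rng.standard_normal(
    (n_timepoints, n_vox_reference)
)

# Fit the converter: predict reference-brain activity from target-brain activity.
converter = Ridge(alpha=10.0)
converter.fit(target_responses, reference_responses)

# New target-brain data can now be projected into the reference space and fed to
# the decoder that was trained on the reference participant.
new_target_data = rng.standard_normal((10, n_vox_target))
aligned = converter.predict(new_target_data)
print(aligned.shape)  # (10, n_vox_reference): ready for the reference decoder
```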
The team then tested the decoders using a short story that none of the participants had heard before. Although the decoder’s predictions were slightly more accurate for the reference participants than for those relying on the converters, the words it predicted from each participant’s brain scans were still semantically related to the words in the test story.
For example, one test story passage discussed a job the speaker disliked: “I’m a waitress at an ice cream parlor. So, uh, it’s not… I don’t know where I want to be, but I know it’s not that.” A decoder using a converter algorithm trained on the film data predicted, “I worked at a job that I thought was boring. I took orders, and I didn’t like them, so I worked on them every day.” While it is not an exact match (the decoder does not reproduce the exact words participants heard, Huth noted), the ideas are related.
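As a rough illustration of how such relatedness might be quantified, the hypothetical snippet below compares the decoded passage with the original using TF-IDF cosine similarity. This is a crude stand-in that only captures shared vocabulary; the study itself relies on more sophisticated semantic similarity measures.

```python
# Crude illustration of scoring decoded text against the original story passage.
# TF-IDF cosine similarity is a simplification; it measures shared vocabulary,
# not the deeper semantic relatedness evaluated in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original = ("I'm a waitress at an ice cream parlor. So, uh, it's not... "
            "I don't know where I want to be, but I know it's not that.")
decoded = ("I worked at a job that I thought was boring. I took orders, "
           "and I didn't like them, so I worked on them every day.")
unrelated = "The spacecraft entered orbit around the red planet at dawn."

vectors = TfidfVectorizer().fit_transform([original, decoded, unrelated])
sims = cosine_similarity(vectors[0], vectors[1:])
print(f"decoded vs. original:   {sims[0, 0]:.2f}")
print(f"unrelated vs. original: {sims[0, 1]:.2f}")
```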
“What's really amazing and remarkable is that we can do this without even using language data,” Huth told Live Science. “We can have data that we collect while someone is watching a silent video, and then we can use that to create this language decoder for their brain.”
Using converters trained on video data to adapt existing decoders for people with aphasia could help them express themselves, the researchers say. The result also suggests there is some overlap in how the brain represents ideas drawn from language and from visual narratives.
Source: www.livescience.com