Meta scientists use AI to decipher magnetic brain scans, revealing how thoughts are converted into typed sentences

Two new studies reveal how we can transform thoughts into written sentences using digital interfaces. (Image credit: Meta)

Researchers at Meta have used artificial intelligence (AI) and non-invasive brain scanning to study the process of converting thoughts into written sentences, as demonstrated in two new studies.

In one study, scientists created an AI model that decoded brain signals to reproduce sentences typed by participants. In another study, the same researchers used AI to visualize how the brain actually forms language, translating thoughts into text.

Scientists say the findings could, in the future, form the basis for a non-invasive brain-computer interface that would help people with brain damage or injury communicate.

“This was a significant step in decoding, especially in a noninvasive approach,” Alexander Huth, a computational neuroscientist at the University of Texas at Austin who was not involved in the study, told Live Science.

Brain-computer interfaces using similar decoding techniques have already been implanted in the brains of people who have lost the ability to speak, and the new research points to a possible path toward wearable, non-invasive devices.

In the first study, the researchers used a technique known as magnetoencephalography (MEG), which records the magnetic field created by electrical impulses in the brain, to track neural activity as participants typed sentences. They then trained an AI language model to decode the brain signals and produce sentences based on the MEG data.
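To make the decoding step concrete, the sketch below shows, in rough outline, how a classifier might be trained to map windows of MEG sensor data to typed characters. The array shapes, the synthetic signals, and the simple linear classifier are illustrative assumptions only; the studies have not released code here, and Meta's actual decoder is a far more sophisticated AI language model.

```python
# Minimal sketch, not the authors' model: decode typed characters from
# windows of MEG sensor data with a simple linear classifier.
# The array shapes, the synthetic (random) signals, and the classifier
# choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical MEG windows: one window of sensor-by-time data per keystroke.
n_windows, n_sensors, n_timepoints = 600, 50, 40
X = rng.normal(size=(n_windows, n_sensors * n_timepoints))  # flattened sensors x time
y = rng.integers(0, 26, size=n_windows)                     # index of the typed letter (a-z)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

decoder = LogisticRegression(max_iter=200)  # stand-in for the trained AI decoder
decoder.fit(X_train, y_train)
print(f"letter-level decoding accuracy: {decoder.score(X_test, y_test):.2%}")
```

On random data like this the accuracy hovers near chance; the point is only the pipeline shape: signal windows in, predicted keystrokes out.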

The model decoded the letters typed by participants with 68% accuracy. Common letters were decoded more accurately, while rarer letters, such as Z and K, had higher error rates. When the model made mistakes, it tended to substitute characters that sit close to the target letter on a QWERTY keyboard, suggesting that the decoder relies on the brain's motor signals to predict which key was pressed.
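That QWERTY-adjacency pattern can be illustrated with a toy check like the one below. The simplified keyboard grid and the sample error pairs are hypothetical and are not taken from the paper's data.

```python
# Illustrative check, not from the paper: do decoding errors land on keys
# physically adjacent to the intended letter on a QWERTY layout?
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_position(letter):
    """Return (row, column) of a letter on a simplified QWERTY grid."""
    for r, row in enumerate(QWERTY_ROWS):
        if letter in row:
            return r, row.index(letter)
    raise ValueError(f"unknown key: {letter}")

def is_adjacent(a, b):
    """True if two different keys sit within one row/column step of each other."""
    (r1, c1), (r2, c2) = key_position(a), key_position(b)
    return (a != b) and abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1

# Hypothetical (target, predicted) pairs from a decoder's error log.
errors = [("s", "a"), ("k", "l"), ("z", "x"), ("e", "w"), ("q", "p")]
adjacent = sum(is_adjacent(t, p) for t, p in errors)
print(f"{adjacent}/{len(errors)} errors fall on adjacent keys")
```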

The team’s second study built on these findings to examine how language is formed in the brain during typing. The scientists collected 1,000 MEG snapshots per second as each participant typed sentences, and from this data they decoded the successive stages of sentence formation.

Decoding Your Thoughts with AI

They found that the brain first generates information about the context and meaning of a sentence, and then creates increasingly detailed representations of each word, syllable, and letter as the participant types.

“These results support long-standing suggestions that language production requires a hierarchical decomposition of sentence meaning into smaller units that ultimately control motor actions,” the study authors note.
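As a toy illustration of that hierarchy, the snippet below breaks a sentence into progressively smaller units. It is a schematic stand-in for the idea, not the authors' analysis, and it omits the syllable level for simplicity.

```python
# Toy illustration, not the authors' analysis: decompose a sentence into the
# nested units (sentence -> words -> letters) that the study suggests the brain
# represents during typing. The syllable level is omitted for simplicity.
def decompose(sentence: str) -> dict:
    """Map each word of the sentence to the ordered letters that spell it."""
    return {word: list(word) for word in sentence.split()}

print(decompose("the quick brown fox"))
# {'the': ['t', 'h', 'e'], 'quick': ['q', 'u', 'i', 'c', 'k'], ...}
```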

To keep the representation of one word or letter from interfering with the next, the brain uses a “dynamic neural code,” the team found: the code constantly shifts where each piece of information is represented within the brain regions responsible for language.

This allows the brain to associate successive letters, syllables, and words, storing information about each over longer periods of time. However, MEG experiments have not been able to pinpoint exactly where in these brain regions each of these language representations originates.

Together, the two studies, which have not yet been peer-reviewed, could help scientists develop noninvasive devices that can improve communication in people who have lost the ability to speak.

The researchers note that while the current setup is too bulky and sensitive to work properly outside a controlled lab environment, advances in MEG technology could open up new horizons for future wearable devices.

“I think they're really on the cutting edge of what they're doing,” Huth said. “They're definitely making the most of what's available in terms of what they can get out of these signals.”

Source: www.livescience.com
