Scientists have used a type of artificial intelligence known as a large language model to gain new insights into how the human brain perceives and produces language. (Image credit: Yuichiro Chino/Getty Images)
Using artificial intelligence (AI), researchers have uncovered the complex patterns of brain activity that occur during ordinary conversations.
The study's authors say the tool could provide fresh insights into the neuroscience of language and could help improve technologies designed to recognize speech or help people communicate in the future.
By building on the way the AI model converts audio into text, the researchers behind the study mapped the brain activity that occurs during communication more accurately than was possible with traditional models. Those models encode specific features of language structure, such as phonemes (the basic sounds that make up words) and parts of speech (nouns, verbs, adjectives and the like).
The model used in the study, called Whisper, is trained on audio files paired with their corresponding text transcripts. From the statistics of this audio-to-text matching, it “learns” to predict the text for new audio files it hasn’t heard before.
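In practice, that audio-to-text step can be reproduced with the openly released Whisper model. The snippet below is a minimal sketch using the open-source openai-whisper Python package; the model size and the audio file name are placeholders for illustration, not details from the study.

```python
# Minimal sketch of Whisper's audio-to-text use, via the open-source
# "openai-whisper" package (pip install openai-whisper).
# "conversation.wav" is a placeholder file name, not from the study.
import whisper

model = whisper.load_model("base")             # downloads pretrained weights
result = model.transcribe("conversation.wav")  # audio in, text out
print(result["text"])                          # predicted transcript
```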
Whisper thus operates purely on these statistics; no features of language structure, such as phonemes or parts of speech, are encoded in its initial parameters. Yet the researchers demonstrated that those structures still showed up in the model after it was trained.
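One common way to relate such a model to the brain is an “encoding model”: extract the network’s internal embeddings for a stretch of audio, then fit a simple linear map from those embeddings to recorded neural signals. The sketch below illustrates that generic recipe using the Hugging Face transformers and scikit-learn libraries; the random placeholder data, array shapes and variable names are illustrative assumptions, not the study’s actual pipeline.

```python
# Generic "encoding model" sketch, not the study's exact pipeline:
# extract Whisper's internal audio embeddings, then linearly map them
# to electrode recordings. All data here is random placeholder data.
import numpy as np
from sklearn.linear_model import Ridge
from transformers import WhisperProcessor, WhisperModel

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperModel.from_pretrained("openai/whisper-base")

# Placeholder: 30 s of 16 kHz audio (the study used real conversations).
audio = np.random.randn(16000 * 30).astype(np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# Encoder hidden states: one embedding vector per ~20 ms audio frame.
embeddings = model.encoder(inputs.input_features).last_hidden_state
X = embeddings.squeeze(0).detach().numpy()   # (n_frames, n_dims)

# Placeholder neural data: one value per frame for 100 electrodes.
y = np.random.randn(X.shape[0], 100)

# A linear map from embeddings to brain activity; how well it predicts
# held-out data is one measure of how closely the two spaces align.
encoding_model = Ridge(alpha=1.0).fit(X, y)
print("fit R^2 on training data:", encoding_model.score(X, y))
```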
The study sheds light on how these types of AI models — called large language models (LLMs) — work. But the research team is more interested in the insights they provide into human language and cognition. Finding parallels between how the model develops language-processing capabilities and how humans develop these skills could be useful for creating devices that facilitate communication.
“It’s really about how we think about cognition,” said lead study author Ariel Goldstein, an associate professor at the Hebrew University of Jerusalem. The study’s findings suggest that “we should think about cognition through the lens of this [statistical] type of model,” Goldstein told Live Science.
Unpacking knowledge
The study, reported March 7 in the journal Nature Human Behaviour, involved four epilepsy patients who had undergone surgery to implant electrodes to monitor brain activity.
With the patients' consent, the researchers recorded all conversations during their hospital stay, which lasted from a few days to a week. In total, they recorded more than 100 hours of audio.
Each participant had between 104 and 255 electrodes placed to monitor brain activity.
Most studies that use recorded conversations take place in a strictly controlled lab environment for an hour or so, Goldstein said. While such controlled settings can be useful for teasing apart the roles of different variables, Goldstein and his colleagues wanted to “study real-life brain activity and human behavior.”
Their research showed how different areas of the brain activate during the tasks required to understand and produce speech.
Goldstein noted that there is ongoing debate about whether distinct parts of the brain handle these tasks separately or the entire organ works more collectively. Under the former hypothesis, one area of the brain might process the sounds that make up words, while another interprets the meaning of those words and a third handles the movements needed for speech.
An alternative theory, Goldstein said, proposes that different parts of the brain work in concert, taking a “distributed” approach.
Researchers have found that certain areas of the brain do indeed correlate with specific tasks.
For example, regions known for their role in processing sound, such as the superior temporal gyrus, showed increased activity when participants were taking in auditory information, while regions responsible for higher cognitive functions, such as the inferior frontal gyrus, were more active when they were working out the meaning of language.
The researchers also observed that these regions activated in sequence rather than all at once.
Source: www.livescience.com