The brain may use similar mechanisms to link related concepts as it does to navigate physical space, according to a new study. (Image credit: kontekbrothers via Getty Images)
When we explore a new city, we often rely on maps and landmarks to determine the fastest and most reliable route between two points. However, new research suggests that our brains can use similar processes to “navigate” between related concepts.
Scientists have developed a mathematical model to understand how the brain processes both spatial and semantic information. The latter includes knowledge about the meaning and importance of different people, places, and objects; brain activity associated with these concepts occurs both when a person, place, or object is perceived in real time and when it is recalled.
The model demonstrated how spatial and semantic information can be represented in the same brain regions, suggesting that the brain can process both types of data in similar ways, the researchers reported March 10 in the journal PNAS.
Two brain regions involved in memory and navigation—the hippocampus and the entorhinal cortex—contain neurons that fire when people move around their physical environment. They also have neurons that fire in response to specific concepts or ideas, known as concept cells. This has led scientists to speculate that these thought processes may be linked.
“Spatial representations and conceptual representations, and semantic computation and spatial computation, seem very different at first glance,” study co-author Tatsuya Haga, a computational neuroscientist at the National Institute of Information and Communications Technology in Japan, told Live Science. Semantic and spatial computation refer to how the brain and computers process information in these distinct domains.
“However, there is a connection between these two different aspects,” Haga said. “So perhaps the brain, especially the hippocampus and entorhinal cortex, uses a single principle to compute many things, including language.”
Haga and his team developed a mathematical model that mimics certain functions of the hippocampus to show how these modes of thought are linked. The model combines two components that govern how it moves from one location or idea to another: a successor representation, which predicts the likelihood of moving from one physical state to another, and a word embedding, which captures the relationships between words.
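A successor representation can be sketched in a few lines of Python. This is a generic textbook formulation, not the study's actual code: for a transition matrix T and discount factor γ, the successor matrix is M = (I − γT)⁻¹, giving the discounted expected number of future visits to each state. The four-state "corridor" environment and the value of γ here are illustrative choices.

```python
import numpy as np

# A tiny hypothetical environment: 4 states in a corridor, 0-1-2-3.
# T[i, j] = probability of stepping from state i to state j
# under a random walk.
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

gamma = 0.9  # discount factor: how far into the future predictions reach

# Successor representation: M = (I - gamma * T)^-1, i.e. the
# discounted expected number of future visits to each state,
# starting from each state.
M = np.linalg.inv(np.eye(4) - gamma * T)

# Row 0 is state 0's "predictive map": nearby, easily reached
# states get higher values than distant ones.
print(M[0].round(2))
```

Plotted over a 2D grid instead of a corridor, rows of M produce the place-field-like bumps (and, after further decomposition, grid-like patterns) that models of this kind are known for.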
The team then asked their model to navigate a simulated physical or conceptual space. The “physical” space was a simulated structure, sometimes with individual rooms, while the conceptual space involved bridging the metaphorical “distance” between related words using analogies.
In response to these tasks, the model generated patterns that resembled the activity of two types of neurons in the hippocampus and entorhinal cortex: one responsible for spatial perception, and the other for concept recognition.
The team demonstrated that the same algorithm used to navigate virtual spaces can also grasp relationships between related concepts, such as countries and their capitals. In this case, to move from the concept of “France” to the concept of “Berlin,” the model might first activate the concept cell for capitals, which would take it from “France” to “Paris,” and then activate another cell representing “Germany,” which would take it to “Berlin.”
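The capital-city hop described above resembles the classic vector-arithmetic property of word embeddings, where a relation such as "capital of" behaves like a roughly constant offset. Here is a toy illustration with hand-picked 2D vectors (the values and vocabulary are invented for this sketch, not taken from the study):

```python
import numpy as np

# Toy 2-D embeddings chosen by hand so the "capital-of" relation
# is a constant offset of (0, 1). Illustrative values, not learned.
vec = {
    "France":  np.array([1.0, 0.0]),
    "Paris":   np.array([1.0, 1.0]),
    "Germany": np.array([2.0, 0.0]),
    "Berlin":  np.array([2.0, 1.0]),
}

def nearest(target, exclude):
    # Return the vocabulary word closest to `target`
    # by Euclidean distance, skipping the query words.
    dists = {w: np.linalg.norm(v - target)
             for w, v in vec.items() if w not in exclude}
    return min(dists, key=dists.get)

# Analogy: Paris - France + Germany ~= Berlin
result = nearest(vec["Paris"] - vec["France"] + vec["Germany"],
                 exclude={"Paris", "France", "Germany"})
print(result)  # Berlin
```

In a real embedding the offsets are only approximate, but the principle is the same: moving through semantic space by adding and subtracting relation vectors, much as one follows a route through physical space.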
“When you’re trying to navigate a maze-like city, you need some kind of map with landmarks and routes,” Rob Mock, a computational neuroscientist at Royal Holloway, University of London, who was not involved in the study, told Live Science. “And the idea is that you can do that while you’re also thinking.”
The model can use various analogies to bridge the metaphorical distance between different semantic concepts.

“So if I think about ‘dog,’ how do I get to ‘cat’? Or how do I get to ‘king’?”
Source: www.livescience.com