In crowded rooms with many people talking, it can be difficult to isolate sounds with affordable hearing aids. (Image credit: Rawpixel.com via Shutterstock)
Have you ever had trouble hearing your friend's voice over other conversations in a crowded room? Scientists call this the “cocktail party problem,” and it can be especially difficult for people with hearing loss.
Most hearing aids have directional filters that help users focus on sounds coming from the front. They are most effective at reducing static background noise, but struggle in more challenging listening environments, such as when the user is surrounded by guests at a cocktail party who are all standing close together and speaking at similar volumes.
However, a new algorithm could improve the performance of hearing aids in the context of the cocktail party problem. The model, called the Biologically Oriented Sound Segregation Algorithm (BOSSA), takes inspiration from the brain's auditory system, which uses information from both ears to identify sound sources and can filter sound by location.
Alexander Boyd, a graduate student in biomedical engineering at Boston University, compared directional filters and BOSSA to flashlights that illuminate whatever is in their path.
“BOSSA is a new flashlight with a narrower, more selective beam,” he told Live Science. Unlike standard filters, BOSSA should be better at picking out individual speakers, though it has yet to be tested under real-world conditions in actual hearing aids.
Boyd conducted a recent lab test of BOSSA, the results of which were published April 22 in the journal Communications Engineering. During the experiment, participants with hearing loss wore headphones playing audio designed to simulate five people speaking simultaneously and from different angles around the listener.
The sound was processed through either BOSSA or a more traditional hearing aid algorithm, and participants compared both filters with how they perceived the sound without any additional processing.
In each trial, participants were asked to attend to sentences spoken by one of the five speakers. The loudness of this “target speaker” relative to the others varied across trials. When the target speaker was within 30 degrees of straight ahead, to either side, participants were able to discern more words at a lower relative volume with BOSSA than with the conventional algorithm or without assistance.
The conventional algorithm appeared to give users better discrimination between speech and static background noise than BOSSA, but this was tested in only four of the eight participants.
The standard algorithm works by suppressing distracting sounds, increasing the signal-to-noise ratio for sounds arriving from a particular direction. In contrast, BOSSA converts sound waves into discrete spikes of data that the algorithm can process, much like the cochlea in the inner ear converts sound-wave vibrations into signals transmitted by neurons.
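To make the spike idea concrete, here is a minimal sketch, in Python with NumPy, of one simple way to turn a waveform into spike times: half-wave rectification followed by threshold crossings with a refractory period. The function name, threshold, and refractory period are illustrative assumptions; the paper's actual front end models the auditory periphery in far more detail.

```python
import numpy as np

def spike_encode(signal, fs, threshold=0.1, refractory_ms=1.0):
    """Convert a waveform into spike times (in seconds).

    Illustrative sketch only, not BOSSA's published front end:
    half-wave rectify the signal and emit a "spike" whenever the
    rectified amplitude crosses a threshold, with a refractory
    period so each peak fires once -- loosely analogous to cochlear
    hair cells driving auditory-nerve firing.
    """
    rectified = np.maximum(signal, 0.0)          # hair cells respond to one phase
    refractory = int(fs * refractory_ms / 1000)  # samples to skip after a spike
    spikes, i = [], 0
    while i < len(rectified):
        if rectified[i] >= threshold:
            spikes.append(i / fs)                # record spike time
            i += refractory                      # enforce refractory period
        else:
            i += 1
    return np.array(spikes)

# Example: encode 50 ms of a 500 Hz tone sampled at 16 kHz
fs = 16_000
t = np.arange(0, 0.05, 1 / fs)
tone = 0.5 * np.sin(2 * np.pi * 500 * t)
print(spike_encode(tone, fs)[:5])  # roughly one spike per cycle (every 2 ms)
```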
The algorithm mimics how specialized cells in the midbrain, the uppermost part of the brainstem that connects the brain and spinal cord, respond selectively to sounds coming from a given direction. These spatially tuned cells estimate direction from differences in the timing and intensity of the sound arriving at each ear, known as interaural time and level differences.
Boyd noted that this aspect of BOSSA is based on studies of the midbrain of barn owls, which have exceptional spatial hearing because they rely on sound cues to locate their prey. The cues processed by BOSSA are then reconstructed into sound for the listener.
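The spatial cues themselves are straightforward to illustrate. The sketch below is a hypothetical example rather than the published method: it estimates the interaural time difference (ITD) as the lag that maximizes the cross-correlation of the left and right ear signals, and the interaural level difference (ILD) as their energy ratio in decibels. A spatially tuned unit of the kind BOSSA models would respond most strongly when these cues match its preferred direction.

```python
import numpy as np

def interaural_cues(left, right, fs, max_itd_ms=1.0):
    """Estimate interaural time and level differences (ITD, ILD).

    Hypothetical illustration of the cues described above, not the
    paper's algorithm: ITD is the lag (in seconds) that maximizes the
    cross-correlation of the two ear signals; ILD is the left/right
    energy ratio in decibels.
    """
    max_lag = int(fs * max_itd_ms / 1000)        # search lags up to +/- 1 ms
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.sum(left * np.roll(right, lag)) for lag in lags]
    itd_s = lags[int(np.argmax(corr))] / fs      # best-aligning lag -> ITD
    ild_db = 10 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd_s, ild_db

# Example: the right ear hears the left ear's signal ~0.3 ms later and
# attenuated, as if the source sat slightly to the listener's left
fs = 16_000
t = np.arange(0, 0.1, 1 / fs)
left = np.sin(2 * np.pi * 400 * t)
right = 0.8 * np.roll(left, 5)                   # 5 samples ~ 0.3 ms delay
print(interaural_cues(left, right, fs))          # ~(-0.0003 s, ~1.9 dB)
```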
BOSSA is modeled on the “bottom-up” attentional pathway of the nervous system, which collects elements of sensory information that are then interpreted by the brain. These sensory inputs determine which aspects of the environment to focus on and which can be ignored.
However, attention also works in a top-down manner, in which a person's prior knowledge and current goals shape their perception. In that case, the person can consciously choose what to focus on.
Source: www.livescience.com