Placement of the 16 microphones used for 'cocktail party scenario' recordings.
Source: Fischer et al., Hearing Research 2021 (CC BY-NC-ND 4.0)

AI improves speech recognition in hearing aids

In noisy environments, it is difficult for hearing aid or hearing implant users to understand their conversational partner, because current audio processors still struggle to focus on specific sound sources. In a feasibility study, researchers from the Hearing Research Laboratory at the University of Bern and the Inselspital now suggest that artificial intelligence could solve this problem.

Hearing aids and hearing implants are currently not very good at selectively filtering one speaker's voice from among many sound sources for the wearer – a natural ability of the human brain and auditory system known in audiology as the “cocktail party effect”. Accordingly, it is difficult for hearing aid users to follow a conversation in a noisy environment. Researchers at the Hearing Research Laboratory of the ARTORG Center, University of Bern, and the Inselspital have now devised an unusual approach to improve hearing aids in this respect: virtual auxiliary microphones whose signals are calculated by artificial intelligence.

The more microphones are available and the more widely they are distributed, the better a hearing aid can focus on sound from a particular direction. For lack of space, however, most hearing aids have only two closely spaced microphones. In the first part of the study, the Hearing Research Laboratory (HRL) determined that the optimal position for an additional microphone is in the middle of the forehead – acoustically ideal, but highly impractical. “We wanted to get around this problem by adding a virtual microphone to the audio processor using artificial intelligence,” said Tim Fischer, a postdoctoral researcher at HRL, explaining the unconventional approach.
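To see why extra, well-separated microphones help, consider the simplest directionality algorithm, a delay-and-sum beamformer: each channel is time-shifted so that sound arriving from the target direction lines up across microphones and is reinforced, while sound from other directions averages out. The sketch below is a minimal illustration of this principle only, not the beamformer used in the study; the function name, far-field assumption, and parameters are ours.

```python
# Minimal delay-and-sum beamformer (illustrative sketch only; the study
# refined a more sophisticated beamformer with a neural network).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def delay_and_sum(signals, mic_positions, look_direction, fs):
    """Steer a microphone array toward a target direction.

    signals:        (n_mics, n_samples) time-aligned recordings
    mic_positions:  (n_mics, 3) microphone coordinates in metres
    look_direction: unit vector pointing toward the desired source
    fs:             sampling rate in Hz
    """
    n_mics, n_samples = signals.shape
    # Far-field assumption: mics farther along the look direction
    # receive the wavefront earlier, by (position . direction) / c.
    lateness = -(mic_positions @ look_direction) / SPEED_OF_SOUND
    lateness -= lateness.min()  # seconds each channel lags the earliest
    out = np.zeros(n_samples)
    for sig, t in zip(signals, lateness):
        shift = int(round(t * fs))
        # Advance late channels so the target's wavefront aligns across
        # mics; sound from other directions then adds incoherently.
        out[: n_samples - shift] += sig[shift:]
    return out / n_mics
```

The farther apart the microphones are, the larger these inter-channel delays become and the more sharply the array can discriminate between directions – which is why an additional microphone at the forehead, well away from the two ear-level microphones, is so attractive.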

Cocktail party data

For the study setup, ARTORG Center engineers used the “Bern Cocktail Party Dataset”, a collection of noise scenarios with multiple sound sources, recorded with multiple microphones on hearing aid and cochlear implant users. Using 65 hours of audio recordings (more than 78,000 audio files), they trained a neural network to refine a commonly used directionality algorithm (a beamformer): to improve speech understanding, the deep learning approach calculated additional virtual microphone signals from the audio mixture. Twenty subjects tested the AI-enhanced hearing in a subjective listening test accompanied by objective measurements. Particularly in cocktail party settings, the virtually sampled microphone signals significantly improved speech quality. Hearing aid and cochlear implant users could therefore benefit from the presented approach, especially in noisy environments.
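The summary above does not spell out the network architecture, so the following is a hypothetical sketch of the core idea only: a small convolutional network maps the two real ear-level channels to an estimate of a microphone at another position (such as the forehead), trained against genuine recordings at that position from a multi-microphone dataset. The class name, layer sizes, and training details here are all illustrative assumptions, not the study's model.

```python
# Hypothetical "virtual microphone" network; architecture, sizes, and
# training details are illustrative assumptions, not the study's model.
import torch
import torch.nn as nn


class VirtualMicNet(nn.Module):
    def __init__(self, n_real_mics=2, hidden=64, kernel=9):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv1d(n_real_mics, hidden, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel, padding=pad),  # virtual channel
        )

    def forward(self, x):  # x: (batch, n_real_mics, n_samples)
        return self.net(x)


# One training step: regress the predicted virtual channel against a
# genuine recording at that position (random tensors stand in for data).
model = VirtualMicNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
real = torch.randn(8, 2, 16000)    # 8 one-second, two-channel clips
target = torch.randn(8, 1, 16000)  # measured channel at virtual position
loss = nn.MSELoss()(model(real), target)
opt.zero_grad()
loss.backward()
opt.step()
```

At inference time, such a predicted virtual channel would simply be appended to the real channels before beamforming, giving the directionality algorithm a wider effective array than the hardware provides.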

"I think that artificial intelligence represents an important contribution to the next generation of hearing prostheses, as it has great potential for improving speech understanding, especially in difficult listening situations," says Marco Caversaccio, Chief Physician and ENT Department Head.

As auditory assistive technologies and implants are a major focus of research at the Inselspital, this work lays important data-driven foundations for further developments that bring users closer to a natural hearing experience. Within the framework of translational studies, these novel approaches will directly benefit patients.

Outlook

Although the virtually added microphones significantly improved speech understanding with hearing aids in this study, further work must overcome some technical hurdles before the methodology can be used in hearing aids or cochlear implant audio processors – for example, achieving stable directional filtering even in reverberant environments.
