Personalized Auditory Scene Modification to Assist Hearing Impaired People (HAcue)

Themes: Health and Wellbeing

Algorithms to personalize the presented auditory scene for improved speech intelligibility and sound localization for hearing-impaired users

Good hearing is important for taking part in society. Being hearing impaired comes with many daily-life problems, ranging from annoying and isolating (e.g., difficulties with speech intelligibility in acoustically noisy situations) to dangerous (e.g., the inability to localize sound sources in traffic). Moreover, hearing-impaired people are often less confident in practical situations, need more assistance, and have, on average, a worse quality of life.

The underlying reasons are that hearing-impaired people suffer from a) an inability to understand speech in acoustically challenging situations, and b) an inability to correctly localize sound sources. Although current hearing aids are a great help, they come with two important unsolved challenges that prevent them from becoming fully successful in practice in terms of speech understanding and source localization.

The first challenge is that state-of-the-art noise reduction algorithms for hearing aids critically rely on many parameters that describe the actual acoustic scene. In practice, these parameters are unknown, yet knowing them is essential to harness the hearing aids' full potential.
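To make this dependence concrete, the minimal Python sketch below shows a standard MVDR beamformer, a common building block of multi-microphone noise reduction (the sketch is our own illustration, not the project's implementation). Its weights depend directly on the target's acoustic transfer function and the noise statistics, so errors in these scene parameters directly degrade the output.

    import numpy as np

    def mvdr_weights(a, R_n):
        # MVDR beamformer weights for a single frequency bin.
        # a   : (M,) complex acoustic transfer function (ATF) of the target
        # R_n : (M, M) noise cross-power spectral density (CPSD) matrix
        # Both are acoustic-scene parameters that are unknown in practice.
        R_inv_a = np.linalg.solve(R_n, a)
        return R_inv_a / (a.conj() @ R_inv_a)

    # Toy scene: 4 microphones, random target ATF, noise CPSD from samples.
    rng = np.random.default_rng(0)
    M = 4
    a = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    N = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
    R_n = (N @ N.conj().T) / 200

    w = mvdr_weights(a, R_n)
    print(abs(w.conj() @ a))  # ~1.0: distortionless response toward the target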

Secondly, many of the algorithms for noise reduction and spatial-cue preservation are developed for normal-hearing people. However, for the individual hearing-impaired person to truly benefit, the information might need to be presented in a different (acoustic) form than for normal-hearing listeners.
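As a purely hypothetical illustration of what "a different acoustic form" could mean, the sketch below remaps an interaural time difference (ITD), which some hearing-impaired listeners cannot exploit, onto an artificial interaural level difference (ILD). The function name and the dB-per-ms mapping are our own assumptions, not the project's method.

    import numpy as np

    def remap_itd_to_ild(left, right, itd_s, db_per_ms=6.0):
        # Hypothetical cue remapping: replace an (inaudible) interaural
        # time difference by an artificial, audible level difference.
        # itd_s > 0 is taken to mean the source is on the left.
        ild_db = db_per_ms * itd_s * 1e3   # ITD in ms -> ILD in dB (assumed map)
        g = 10.0 ** (ild_db / 20.0)
        return left * g, right / g         # boost near ear, attenuate far ear

    fs = 16_000
    t = np.arange(fs) / fs
    sig = np.sin(2 * np.pi * 500 * t)      # 500 Hz tone at both ears
    left, right = remap_itd_to_ild(sig, sig, itd_s=300e-6)  # 0.3 ms ITD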

These two challenges are strongly connected and come together in this proposal: adjusting the speech to the (hearing-impaired) user requires knowing the parameters that describe the scene, but also knowing how the presented audio can be personalized such that speech understanding and sound localization are optimized.

The goal of this project is therefore to solve these two challenges jointly and to develop an algorithm that personalizes the presented auditory scene for improved speech intelligibility and sound localization for hearing-impaired users.

To achieve this goal, we collaborate within a consortium of experts on signal processing for hearing aids (TU Delft), experts on auditory perception (Oldenburg University), and leading companies in the field of audio, speech and hearing-aid processing (Bang & Olufsen, GN ReSound and Bosch). More concretely, we will develop a framework based on confirmatory factor analysis that allows joint estimation of all parameters that describe the acoustic scene (e.g., the power spectral densities and acoustic transfer functions of all individual sources, the microphone self-noise, etc.). In addition, we will investigate how to optimize the presented scene for the hearing-aid user, such that inaudible spatial cues are transformed into different, audible spatial cues and artificial (perceptually inaudible) spatial diversity is introduced, so that localization of sources becomes possible while intelligibility is optimized.
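A plausible form of such a factor-analysis model, written in our own notation and assuming narrowband STFT processing, structures the microphone cross-power spectral density matrix per frequency as

\[
\mathbf{P}(\omega) \;=\; \mathbf{A}(\omega)\,\boldsymbol{\Phi}(\omega)\,\mathbf{A}^{H}(\omega) \;+\; \sigma^{2}(\omega)\,\mathbf{I},
\]

where the columns of \(\mathbf{A}(\omega)\) are the acoustic transfer functions of the \(r\) sources, \(\boldsymbol{\Phi}(\omega) = \mathrm{diag}\big(\phi_{1}(\omega), \ldots, \phi_{r}(\omega)\big)\) contains their power spectral densities, and \(\sigma^{2}(\omega)\) models the microphone self-noise. Fitting this structured model to an estimate of \(\mathbf{P}(\omega)\), e.g., by maximum likelihood, would then yield all scene parameters jointly.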

In this way, the hearing-impaired user can fully benefit from improved intelligibility and sound localization.

Project data

Researchers: Richard Hendriks, Richard Heusdens, Giovanni Bologni, Jordi de Vries, Changheng Li
Starting date: March 2022
Closing date: March 2027
Funding: 500 k€; related to the group: 500 k€
Sponsor: NWO-TTW
Partners: Prof. S. van de Par (Oldenburg University)
Users: GN ReSound, Alpine BV, Bang & Olufsen A/S (Denmark), Bosch Security Systems B.V.
Contact: Richard Hendriks
