Distributed Processing of Audio Signals (DPAS)

This industry-sponsored project aims to develop new distributed processing approaches and new computational methods for determining the acoustic room response.
The active control of sound fields is increasingly important. It is a natural component of immersive audio-visual environments that can be used either for the transfer of a remote scenario, possibly augmented with additional information, or to create artificial scenarios for education or entertainment. It can also be used for tasks such as the delivery of personalized audio in a car without the use of headphones, and the elimination of reverberation in the delivery of audio to an audience.

To control the sound field, we must know the relation between the sound signal at any location in a space and the signals generated by the loudspeakers. It is generally considered reasonable to assume that the response to a loudspeaker is linear. The response can therefore be described by a Green's function (the room impulse response). Linearity by itself does not make the problem of characterizing the acoustic environment tractable, however. Sound is reflected from walls and objects in the room and this, together with the very large number of degrees of freedom of a sound field, makes it difficult to ascertain what the room response is.

We can attempt to estimate the sound field by modeling the room or by making measurements with microphones. Even if the room is described as a simple box of known dimensions and known reflection coefficients, the computational complexity of finding the room response is extremely high, and the assumptions about the room are generally incorrect. The alternative approach is not much better: in practice the number of microphones is always insufficient to estimate the Green's function in sufficient detail. Measurements can be made somewhat more effective by assuming that the sound field can be described locally with a relatively small number of plane waves, but this leads to a description whose error grows with distance from the microphones. It is likely that descriptions of the Green's function over a larger area require both measurements and effective modeling.
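To see why even a simple box room is computationally demanding, consider the classic image-source construction: each reflection is modeled as a mirrored copy of the source, and the number of mirrored sources grows rapidly with the reflection order. The sketch below is purely illustrative (the function `image_source_rir` and its parameters are our own, not the project's implementation) and assumes a rectangular room with a single frequency-independent reflection coefficient for all walls.

```python
import numpy as np

def image_source_rir(room, src, mic, beta, fs=8000, c=343.0,
                     order=2, n_taps=2048):
    """Toy image-source room impulse response for a box room.

    room      : (Lx, Ly, Lz) dimensions in metres
    src, mic  : source and microphone positions (3-D, metres)
    beta      : wall reflection coefficient, identical for all six walls
    order     : maximum image index per axis (reflections grow as O(order^3))

    Illustrative sketch only; real rooms are neither boxes nor
    frequency-independent reflectors.
    """
    h = np.zeros(n_taps)
    L, s, m = (np.asarray(v, float) for v in (room, src, mic))
    for n in np.ndindex(*(2 * order + 1,) * 3):
        n = np.array(n) - order                   # image grid index per axis
        for p in np.ndindex(2, 2, 2):
            p = np.array(p)
            img = (1 - 2 * p) * s + 2 * n * L     # mirrored source position
            d = np.linalg.norm(img - m)           # path length to microphone
            k = int(round(fs * d / c))            # delay in samples
            if k < n_taps:
                n_refl = np.abs(n - p).sum() + np.abs(n).sum()
                h[k] += beta ** n_refl / (4 * np.pi * d)
    return h
```

Even this crude sketch sums (2·order+1)³·8 image sources; an accurate response for a real room at audio sample rates requires vastly more terms, which is why fast computational methods are a goal of the project.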

The accurate estimation of fields, including sound fields, often requires a large number of measurements. It is desirable to have a scalable and robust paradigm for the measurement and subsequent processing. Conventional approaches, where the measured data are sent to a central location and then processed centrally, do not scale well and are not robust. A solution to this problem is to perform distributed processing and only send processed information from the sensor network to the central location. For the distributed characterization of sound fields, or properties thereof, this requires the distributed implementation of a set of signal processing operations. While distributed processing technologies have developed rapidly in recent years, many operations are still problematic, for example because they require many communication cycles, or because they require synchronicity between all sensors.
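As a toy illustration of sending processed information rather than raw measurements, the sketch below runs synchronous average consensus on a small sensor network: each node repeatedly exchanges values only with its immediate neighbours, yet every node converges to the global mean without any central collection point. The function name and parameters are hypothetical, and the project's own algorithms are asynchronous and more general; the example also shows the communication-cycle cost mentioned above, since many iterations are needed before the values agree.

```python
import numpy as np

def consensus_average(values, neighbors, steps=200, eps=0.3):
    """Synchronous average consensus (illustrative sketch).

    values    : initial local measurement at each node
    neighbors : neighbors[i] lists the nodes that node i can talk to
    eps       : step size; must be small enough for stability
                (eps < 2 / lambda_max of the graph Laplacian)

    Each iteration, every node nudges its value towards its
    neighbours' values; all nodes converge to the network mean.
    """
    x = np.asarray(values, float).copy()
    for _ in range(steps):
        new = x.copy()
        for i, nbrs in enumerate(neighbors):
            new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = new
    return x
```

Note that every node ends up with the same answer using only local communication, but at the price of many iterations; reducing such communication cycles, and removing the need for synchronous updates, is exactly the kind of problem the project addresses.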

In this project we aim to improve the estimation of the Green's function by integrating and extending existing methods, and to develop distributed versions of the relevant signal processing algorithms. We aim for methods with relatively low computational effort, making them useful in practical applications. We will build on the fast methods for computing the room response and on the asynchronous distributed processing algorithms that we developed in recent years. The end result will be a significant step towards practical methods for controlling the sound field in complex acoustic environments.

Figure: sound field in a room

Project data

Researchers: Bastiaan Kleijn, Richard Heusdens, Thomas Sherson, Jia Yan, Wangyang Yu
Starting date: September 2014
Closing date: September 2018
Funding: 650 k€; related to group 650 k€
Sponsor: Huawei
Users: Huawei
Contact: Bastiaan Kleijn