IoSR Blog : 15 May 2015
Thoughts from the 138th Audio Engineering Society Convention, Warsaw
The Audio Engineering Society (AES) is one of the leading professional organisations for audio engineers, bringing together practitioners and researchers from all aspects of professional recording, equipment manufacture, and acoustics research. Aside from its publications and regular meetings, both in local sections and at larger international conferences, the society hosts a twice-yearly convention, alternating between European and American host cities. These four-day meetings include a spectacular range of technical and practical presentations, workshops, and tutorials, among other special events.
The most recent event was in Warsaw between the 7th and 10th of May; I presented two papers describing output from the S3A project. I thought I’d write a short post summarising some of what I saw at the convention.
Warsaw was a great location for the convention; it’s a really interesting city, and the weather was mostly good too. This photo shows the impressive Multimedia Fountain Park.
The first paper that I presented was entitled “Elicitation of the differences between real and reproduced audio” [1]. The paper described an experiment in which we asked two groups of listeners to move freely between a live performance and a simultaneous nine-channel loudspeaker reproduction of various musical groups, and to write down their thoughts on the differences between the two listening experiences. This provided data for a subsequent group discussion to determine a set of categories that characterise the difference in experience. It’s quite difficult to compare real and reproduced audio directly, so this type of experiment has not been widely reported; however, I was happy to see that there was interest in this work: from people describing similar experiments that they’d seen, to others considering integrating some of our elicitation methods into their own setups.
Interestingly, during a convention event a few days before my talk, the audience were given the chance to compare a loudspeaker reproduction with a live performance of some Chopin songs for voice and piano. Andrew Lipinski (Lipinski Sound) had made high-fidelity recordings and set up a custom-built 3D playback system, and the audience listened to alternating recordings and live performances given by the same musicians. The sound produced by the reproduction was very natural; I found that if I closed my eyes I almost felt transported to the concert hall. However, there were pronounced differences between the reproduction and the live performance, not least because the concert hall in which the recording was made was different to the concert hall in which the live performance took place.
Attribute elicitation was also a common theme in presentations, which I found interesting as it’s the topic of my current work, as well as a lot of the work in my PhD thesis [2]. As well as our paper described above, we heard about the development of a “sound wheel” (analogous to the well-known “wine aroma wheel” or similar tools from other sensory sciences), containing the important perceptual attributes that characterise reproduced sound [3], and also about eliciting the terms used to describe the sound of analogue compressors [4].
The second talk that I gave [5] described the large-scale recording session that we performed at Surrey towards the beginning of the S3A project, in which we used a wide range of microphone techniques intended for reproduction methods that include height channels. How to capture signals for reproduction over elevated loudspeakers was a recurring theme in many sessions at the convention; there were numerous demonstrations of with-height 3D audio systems as well as talks proposing new capture methods [6].
The University of Surrey was well represented at the convention. Phil Coleman (who also works on the S3A project) gave two presentations, relating to creating audio objects from microphone array recordings [7] (including those made in the large recording session mentioned above) and to parameterising the ambience characteristics of a room [8]. A researcher from BBC R&D also talked about his work on developing semantic audio production tools for radio [9].
It was good to see many past members of the Institute of Sound Recording during the convention: Francis Rumsey gave an organ recital; Hyunkook Lee led a workshop on the psychoacoustics of 3D sound recording; and Slawomir Zielinski presented a talk on unification of assessment methods for different audio-visual media [10]. It was also encouraging to see current undergraduate students getting involved with the technical programme as well as various student events.
Alongside the papers mentioned above, there was a huge range of really interesting and relevant research presented. The convention provided a great opportunity to get a snapshot of the research going on in different groups around the world and to catch up with colleagues from other institutions. I’m looking forward to the next event!
References
by Jon Francombe