One of the requirements for a fully immersive gaming experience is the correct and visually coherent reproduction of the locations of sounding objects. The limitations of human spatial resolution in this regard have been known for decades and are commonly taken into account when designing audio reproduction systems. However, the mechanisms responsible for the perception of static and dynamic sound sources differ, and less attention has been devoted to the perception of dynamically relocated sound sources, which are omnipresent in video games. This paper explores the human ability to follow moving sound sources presented as First- and Higher-Order Ambisonic renderings over headphones, and aims to find optimal, psychoacoustically justified parameters that could significantly reduce the computational requirements of audio engines.
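To illustrate the kind of rendering the paper refers to, the following is a minimal sketch of first-order Ambisonic (classic B-format) encoding of a mono source whose azimuth changes over time. The function names (`foa_encode`, `pan_moving_source`) and the per-sample update strategy are illustrative assumptions, not the paper's method; a real engine would update source positions at some coarser, psychoacoustically chosen rate, which is precisely the parameter the paper investigates.

```python
import math

def foa_encode(sample, azimuth, elevation):
    """Encode one mono sample into first-order Ambisonics (B-format,
    channels W, X, Y, Z; W carries the classic 1/sqrt(2) gain).
    Angles are in radians; azimuth 0 is straight ahead."""
    w = sample * (1.0 / math.sqrt(2.0))                   # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return (w, x, y, z)

def pan_moving_source(samples, azimuths):
    """Re-encode every sample at its own (time-varying) azimuth in the
    horizontal plane -- the dense update scheme an engine could coarsen."""
    return [foa_encode(s, az, 0.0) for s, az in zip(samples, azimuths)]
```

For example, a unit sample at azimuth 0 lands entirely in the X channel, while the same sample at 90 degrees lands in Y; sweeping the azimuth between frames produces the dynamically relocated source whose perceptibility the paper measures.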
Title of host publication: Audio Engineering Society 41st International Conference
Subtitle of host publication: Audio for Games
Place of publication: London
Publication status: Published - 2 Feb 2011