Breaking the 64 spatialized sources barrier
Abstract
Spatialized soundtracks and sound effects are standard elements of today's video games. However, although 3D audio modeling and content creation tools (e.g., Creative Labs' EAGLE [4]) provide some help to game audio designers, the number of available 3D audio hardware channels remains limited, usually ranging from 16 to 64 in the best case. While one can wonder whether more hardware channels are actually required, it is clear that large numbers of spatialized sources might be needed to render a realistic environment. This problem becomes even more significant if extended sound sources are to be simulated: think of a train, for instance, which is far too long to be represented as a point source. Since current hardware and APIs implement only point-source models or limited extended-source models [2,3,5], a large number of such sources would be required to achieve a realistic effect (see Example 1). Finally, 3D audio channels might also be used for a playback-independent representation of surround music tracks, leaving the generation of the final mix to the audio rendering API but requiring the programmer to assign some of the precious 3D channels to the soundtrack. Moreover, the dynamic allocation schemes currently available in game APIs (e.g., DirectSound 3D [2]) remain very basic. As a result, game audio designers and developers have to spend considerable effort mapping the potentially large number of sources to the limited number of channels as effectively as possible. In this paper, we address this problem by reviewing and introducing several automatic techniques for efficient hardware mapping of complex dynamic audio scenes with currently available hardware resources.
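To make the mapping problem concrete, the following is a minimal sketch (not the paper's algorithm) of the kind of priority-based allocation a developer might hand-roll today: when the scene contains more logical sources than hardware 3D channels, rank sources by a rough perceptual importance and give hardware voices only to the top-ranked ones. All names (`Source`, `assignChannels`, `kMaxHardwareChannels`) and the importance heuristic are hypothetical illustrations, not part of the paper or of any specific API.

```cpp
// Naive priority-based mapping of many logical sources onto a fixed
// budget of hardware 3D channels. Illustrative sketch only.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Source {
    float x, y, z;   // world-space position
    float gain;      // linear source amplitude
    int   channel;   // assigned hardware channel, or -1 if not spatialized in hardware
};

constexpr std::size_t kMaxHardwareChannels = 32;  // typical budgets range from 16 to 64

// Rough perceptual importance: louder and closer sources win a hardware channel.
float importance(const Source& s, float lx, float ly, float lz) {
    float dx = s.x - lx, dy = s.y - ly, dz = s.z - lz;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return s.gain / std::max(dist, 1.0f);  // simple 1/distance attenuation
}

// Give the most important sources the limited hardware channels; the rest
// would have to be dropped, or premixed/clustered in software.
void assignChannels(std::vector<Source>& sources, float lx, float ly, float lz) {
    std::vector<std::size_t> order(sources.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return importance(sources[a], lx, ly, lz) > importance(sources[b], lx, ly, lz);
    });
    for (std::size_t rank = 0; rank < order.size(); ++rank) {
        sources[order[rank]].channel =
            (rank < kMaxHardwareChannels) ? static_cast<int>(rank) : -1;
    }
}
```

A per-frame ranking like this is roughly what basic dynamic allocation schemes do; the paper's point is that such hand-tuned, point-source-only budgeting breaks down for large or extended sound scenes, motivating the automatic mapping techniques it reviews and introduces.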
Main file: ntsingos_gamasutra2.pdf (1.74 MB)
Additional file: screen.gif (13.4 KB)
Origin: Files produced by the author(s)
Format: Figure, Image