posted on 2020-05-26, 14:43, authored by Balandino Di Donato, Jamie Bullock
Sound spatialisation is an important component in interactive performances as well as in game audio and virtual or mixed reality systems. HCI practices are increasingly focused on creating a natural user experience and embodied interaction through gestural control. Body movements that coincide with sounds consist of both performed ‘sound-producing’ gestures and ancillary and communicative movements. Thus, different gestural typologies may relate to the same audio source. Furthermore, gestures may depend on the context in which they are expressed; in other words, they can carry different semantic or semiotic meanings in relation to the situation, environment or reality in which they are enacted. To explore these research themes, we are developing gSPAT: a software and hardware system that drives live sound spatialisation for interactive audio performance using gestural control based on human-meaningful gesture-sound relationships. The ultimate aim is to provide a highly natural and musically expressive sound spatialisation experience for the performer. Here we describe three experiments conducted to explore possible directions for gSPAT’s future development. The experiments employ practice-based and ethnographic research methods to establish the applicability, naturalness and usability of several approaches to the system’s interaction design.
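As a rough illustration of the kind of gesture-to-space mapping described above, the sketch below converts a wrist yaw reading into a panning azimuth and sends it to a spatialisation renderer over OSC. The /gspat/source address, host, port and parameter names are assumptions made for this example and are not taken from the published system.

```python
# Hypothetical sketch: mapping a wrist yaw reading to a panning azimuth
# and sending it to a spatialisation renderer over OSC.
# The OSC address, host, port and parameter names are illustrative assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed renderer host and port


def yaw_to_azimuth(yaw_degrees: float) -> float:
    """Wrap a raw sensor yaw reading into the 0-360 degree azimuth range."""
    return yaw_degrees % 360.0


def send_source_position(source_id: int, yaw_degrees: float, distance_m: float) -> None:
    """Send one sound source's azimuth (degrees) and distance (metres) as an OSC message."""
    azimuth = yaw_to_azimuth(yaw_degrees)
    client.send_message("/gspat/source", [source_id, azimuth, distance_m])


# Example: a performer pointing 45 degrees to the right, two metres away.
send_source_position(1, 45.0, 2.0)
```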
History
Citation
Student Think Tank at the 21st International Conference on Auditory Display (ICAD–2015), July 7, 2015, Graz, Austria
Source
International Conference on Auditory Display; Student Think Tank at the 21st International Conference on Auditory Display (ICAD–2015), July 7, 2015, Graz, Austria
Published in
Proceedings of the International Conference on Auditory Display, Student Think Tank