Posted on 2020-04-29, 12:52. Authored by Atau Tanaka, Balandino Di Donato, Michael Zbyszynski, Geert Roks.
We present a system that allows users to try different ways of training neural networks and temporal models to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. We build upon research in sound tracing and mapping-by-demonstration, asking participants to design gestures for performing time-varying sounds using a multimodal device combining inertial measurement (IMU) and muscle sensing (EMG). We presented users with two classical techniques from the literature, Static Position regression and Hidden Markov Model-based temporal modelling, and propose a new technique, Windowed Regression, which captures gesture anchor points on the fly as training data for neural-network-based regression. Our results show trade-offs between accurate, predictable reproduction of source sounds and exploration of the gesture-sound space. Several users were attracted to our Windowed Regression technique. This paper will be of interest to musicians engaged in going from sound design to gesture design, and it offers a workflow for interactive machine learning.
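The abstract names Windowed Regression only at a high level. As a rough illustration of the general idea (sliding windows over a multimodal sensor stream used as training examples for a neural network regressor that outputs synthesis parameters), a minimal sketch in Python follows. The channel counts, the per-window features (mean and RMS), the choice of scikit-learn's MLPRegressor, and the anchoring of each window's target to the parameters at the window's end are all our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical recording: 8-channel EMG + 9-axis IMU sampled together.
# Shapes and channel counts are illustrative only.
rng = np.random.default_rng(0)
sensor_stream = rng.standard_normal((1000, 17))  # (samples, EMG+IMU channels)
synth_params = rng.random((1000, 4))             # time-varying sound parameters

WINDOW = 32  # samples per analysis window (assumed value)
HOP = 8      # hop between successive windows (assumed value)

def windowed_features(stream, window=WINDOW, hop=HOP):
    """Slide a window over the stream; summarise each window by per-channel mean and RMS."""
    feats = []
    for start in range(0, len(stream) - window + 1, hop):
        w = stream[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0),
                                     np.sqrt((w ** 2).mean(axis=0))]))
    return np.asarray(feats)

X = windowed_features(sensor_stream)
# Target for each window: the synthesis parameters at the window's end,
# treated here as that window's "anchor point".
y = synth_params[WINDOW - 1::HOP][:len(X)]

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X, y)

# At performance time, the most recent window of sensor data is mapped
# continuously to sound-synthesis parameters.
print(model.predict(X[-1:]))
```

In this reading, the windowing step is what distinguishes the approach from Static Position regression, which would pair single sensor frames with fixed poses rather than summaries of ongoing movement.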
Funding
We acknowledge our funding body H2020-EU.1.1. - EXCELLENT SCIENCE - European Research Council (ERC) - ERC-2017-Proof of Concept (PoC) - Project name: BioMusic - Project ID: 789825.
Citation
Tanaka, Atau; Di Donato, Balandino; Zbyszynski, Michael; and Roks, Geert. 2019. 'Designing Gestures for Continuous Sonic Interaction'. In: The International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, 3-6 June 2019.
Source
The International Conference on New Interfaces for Musical Expression
Version
VoR (Version of Record)
Published in
International Conference on New Interfaces for Musical Expression