Posted on 2020-05-26, 11:45. Authored by Michael Zbyszynski, Balandino Di Donato, Atau Tanaka.
This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting corporal density to enable expressive variation on the original gesture-timbre space.
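Concatenative synthesis of the kind described above typically selects, at each moment, the corpus unit whose derived timbral features are closest to a target derived from the gesture. The following is a minimal illustrative sketch of such nearest-neighbour unit selection; the unit names, feature choices, and values are hypothetical and not taken from the paper.

```python
import math

# Hypothetical corpus: each unit is a short sound segment tagged with
# derived timbral features (here: spectral centroid in Hz, loudness in dB).
corpus = [
    {"id": "unit_a", "features": (450.0, -18.0)},
    {"id": "unit_b", "features": (1200.0, -12.0)},
    {"id": "unit_c", "features": (3100.0, -6.0)},
]

def select_unit(target, corpus):
    """Return the corpus unit whose feature vector is nearest
    (Euclidean distance) to the target feature vector derived
    from the incoming gesture."""
    return min(corpus, key=lambda u: math.dist(u["features"], target))

# A gesture mapped to a target of (1000 Hz centroid, -10 dB loudness)
# selects the closest-matching unit from the corpus.
best = select_unit((1000.0, -10.0), corpus)
print(best["id"])  # → unit_b
```

In practice the feature space is higher-dimensional and the gesture-to-target mapping is learned, but the core lookup step is a nearest-neighbour query of this form.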
Funding
The research leading to these results has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789825).
History
Citation
14th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseille, France, 14–18 October 2019.
Source
14th International Symposium on Computer Music Multidisciplinary Research (CMMR)
Version
AM (Accepted Manuscript)
Published in
14th International Symposium on Computer Music Multidisciplinary Research