Peer-reviewed article

Loops, Games and Playful Things

2013; Taylor & Francis; Volume: 32; Issue: 1; Language: English

10.1080/07494467.2013.774122

ISSN

1477-2256

Authors

Andrea Cera

Topic(s)

Interactive and Immersive Displays

Abstract

This article focuses on the creation of an audio engine for Urban Musical Game, a multi-partner project led by the Institut de Recherche et Coordination Acoustique/Musique (IRCAM). Product design, informatics, engineering and musical composition work in close interaction to harmonize several issues: wireless technology embedded in everyday objects, real-time gesture analysis and recognition, content-based audio processing, modular MaxMSP patching, game design, and experimentation on listening modes.

Keywords: Audio Gaming; Generative Music; Popular Music

Acknowledgments

I thank IRCAM's Real-Time Musical Interaction team, and in particular N. Schnell and N. Rasamimanana, for their precious assistance during the writing of this article.

Notes

I started my work for UMG by reading an observation by L. Wittgenstein about how different games can be considered in the light of the concept of family resemblance (Wittgenstein, 1953). I have always been interested in shedding a similar light on the categorization of today's and yesterday's commercial music genres (Fabbri, 1999).

An old electric guitar tuned a fifth lower, a few acoustic guitars, an open-back banjo, an electric upright bass, a fretless bass, several kinds of small percussion instruments, a ukulele, a few old synthesizers from the '80s, and an Italian 200-watt amplifier from the '70s.

I used several microphones, some of them very old, along with two Line 6 PODs for direct recording of the electric guitars and basses. In addition, Julien Bloit, from the IRCAM team, recorded several drum tracks with contact microphones, which I used to trigger percussion samples, mainly from the Vienna Instruments Library. Additional parts were created directly in Cubase with various virtual studio technology plugins, as well as MaxMSP patches connected via a ReWire connection. Benjamin Miller, from the IRCAM team, recorded a few Parisian soundscapes, used for the ‘Ambient’ tracks.

Gesture analysis is based on three layers: (1) low-level features, directly extracted from gesture data (energy, kick, fall, spin, etc.); (2) mid-level data fusion, integrating several gesture dimensions together and considering data over time (dribbling tempo estimation, angle calculation with Kalman filtering, etc.); and (3) higher-level ‘gesture following’, used to define more complex playing techniques based on temporal patterns in the gesture data (rolling, dancing, dribbling, shaking, etc.). This gesture information is sent to the audio engines through Open Sound Control (OSC); see the sketch after these notes.

I used different kinds of visualization, such as MuBu grids, simulations in a multitrack digital audio workstation, and SDIF markers, which have been useful for imagining different configurations. Unfortunately, the timbral aspects and the effect of micro-variations of the groove can hardly be visualized, so a preset can only be validated by listening.
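As an illustration of the gesture-analysis note above, here is a minimal sketch of the lowest layer of such a pipeline: extracting a simple energy feature from accelerometer frames, smoothing it, and forwarding it to an audio engine over Open Sound Control. It is not the article's implementation (which relies on IRCAM's MaxMSP tools); the python-osc library, the OSC address /umg/gesture/energy, the host/port, and the helper names are illustrative assumptions.

```python
# Hypothetical sketch of a low-level gesture feature sent over OSC.
# Assumes the python-osc package; host, port and OSC address are illustrative.
import math
from pythonosc import udp_client

OSC_HOST = "127.0.0.1"   # assumed address of the audio engine (e.g. a MaxMSP patch)
OSC_PORT = 9000          # assumed UDP port
client = udp_client.SimpleUDPClient(OSC_HOST, OSC_PORT)

_smoothed = 0.0  # running, exponentially smoothed energy value


def frame_energy(samples):
    """Mean magnitude of (x, y, z) acceleration over one analysis frame."""
    return sum(math.sqrt(x * x + y * y + z * z) for x, y, z in samples) / len(samples)


def send_energy(samples, smooth=0.8):
    """Smooth the energy feature and forward it as an OSC message."""
    global _smoothed
    _smoothed = smooth * _smoothed + (1.0 - smooth) * frame_energy(samples)
    client.send_message("/umg/gesture/energy", _smoothed)  # assumed OSC address
    return _smoothed


# Example: one frame of made-up accelerometer data (in g), e.g. a ball being shaken.
send_energy([(0.1, 0.9, 0.2), (0.3, 1.1, 0.1), (0.0, 1.0, 0.4)])
```

In a real setup of this kind, the mid-level layer (tempo estimation, Kalman-filtered angles) and the higher-level gesture following would consume streams of such features over time rather than single frames.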
