Real Time Sound Design Performance for Theater

Hello ACB readers! My name is Kyle Vande Slunt and I’m a sound designer living in Minneapolis, MN. I’ve been a big fan of ACB for some time, and John has graciously allowed me to contribute. I look forward to posting more sounds and articles and hearing your feedback. It’s great to meet all of you.

Back in November 2008 I was commissioned by the Open Eye Figure Theater in Minneapolis to create sound design for a new work by Michael Sommers entitled “Snowman”. The play was a sound designer’s dream: a magical fable told through people, puppets, animations, multiple projections, and some “LOST”-like magic. The goal was to create an entire world of ambiances, sounds, and transitions belonging to this snowy world, one that may have existed in the past or possibly in the far future.

Doubling as the show’s audio engineer, I had to devise a way to trigger (perform) all of these sounds and the recorded musical score for each performance. In smaller theaters this feat is normally accomplished by burning everything to a playable CD or loading it into QLab (a popular Mac-based sound program for theater). For Snowman, however, I needed to trigger all of these elements completely independently of one another for layering, mixing, and effects purposes, and in some cases they needed to be triggered very quickly.

Snowman Ableton Live Session

The solution: I loaded all of my audio clips (sfx, loops, music, etc.) into a highly organized Ableton Live session (see picture) and assigned MIDI notes to trigger the clips. In Live you can assign only one note to a clip, so each clip had to be a different note on the keyboard. I went through and logically mapped the keys to the sounds and music for the show, using the black keys for music and the white keys for sound effects and ambiances, and labeling each key with electrical tape and a description. As you can see in the picture, I used only white and yellow tape; anything more saturated in hue would have been impossible to read in the dark booth. The white tape is MIDI channel 1 and the yellow tape is MIDI channel 2. (I switched MIDI channels instead of octaves to avoid labeling hassles.) Each channel of audio was then assigned to my BCF-2000, where I had mixing control over every track using multiple fader banks. The BCF’s knob banks came in handy for sending the audio to return tracks for real-time effect manipulation.
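As a rough illustration of the mapping scheme, here is a short Python sketch. The clip names below are invented examples, not the actual show cues; the point is that since Live allows only one note per clip, each (channel, note) pair addresses exactly one clip, and switching MIDI channels doubles the usable range of a single taped-up octave.

```python
# Hypothetical sketch of a (channel, note) -> clip mapping for one octave.
WHITE_KEYS = [60, 62, 64, 65, 67, 69, 71]  # C D E F G A B -> sfx/ambiances
BLACK_KEYS = [61, 63, 66, 68, 70]          # C# D# F# G# A# -> music cues

def build_clip_map(channel, white_labels, black_labels):
    """Map (channel, note) pairs to clip labels for one labeled octave."""
    mapping = dict(zip(((channel, n) for n in WHITE_KEYS), white_labels))
    mapping.update(zip(((channel, n) for n in BLACK_KEYS), black_labels))
    return mapping

# Channel 1 = white tape; a second call with channel=2 would cover the yellow tape.
clips = build_clip_map(
    1,
    ["wind bed", "door creak", "footsteps", "snowfall", "crowd", "clock tick", "thunder"],
    ["overture", "waltz", "lullaby", "finale", "bows"],
)
print(clips[(1, 60)])  # -> wind bed
```

With a dictionary like this, an incoming note-on message can be looked up directly to find which clip it fires.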

Snowman Keyboard

Each show felt like a performance in which I was jamming away on my weird Snowman keyboard while layering and applying effects to sounds at the same time. Just for fun, I’ve included a small collage of some of the sounds from the show. Enjoy!

Snowman Collage

My Favorite GMS Generated Melody So Far

I’ve been spending most of my limited spare time practicing with the GMS in preparation for tomorrow night’s performance in Minneapolis. While practicing tonight I produced this melody. I was controlling the sequencer with a blinking LED spinning top and randomly looped this sequence of notes.

I’ve since built a track around it with more loops from the GMS, but it sounds good on its own. The nice thing about this technique is that everything I capture is MIDI, so if I get a good melody, but don’t like the sound, it’s easy to change the timbre, tempo, transposition, etc. In other words, beyond being a performance tool, I can use it effectively for composition and idea gathering.
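That flexibility is easy to see in code. Here is a minimal sketch (the note numbers and durations are invented, not the actual melody): once a phrase is captured as MIDI data, transposition and tempo changes are simple arithmetic on the note list.

```python
# A captured phrase as (MIDI note number, duration in beats) pairs.
def transpose(notes, semitones):
    """Shift every note number by a fixed interval."""
    return [(pitch + semitones, dur) for pitch, dur in notes]

def retime(notes, factor):
    """Scale every duration; e.g. factor=0.5 doubles the effective tempo."""
    return [(pitch, dur * factor) for pitch, dur in notes]

melody = [(60, 0.5), (64, 0.25), (67, 0.25), (72, 1.0)]  # C E G C (example data)
print(transpose(melody, -12)[0])  # -> (48, 0.5): same melody an octave down
```

Changing the timbre is even simpler: the same note data is just routed to a different instrument.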

My Favorite GMS Generated Melody So Far

GMS Practice Track Number 3

I’m almost finished with my initial round of tweaking and bug fixing on the GMS, so I’ve finally been able to put a bit more time into actually using the software for its intended purpose. My most recent work with it involves a companion document in Ableton Live that loads a number of virtual instruments into about nine separate MIDI tracks. Ableton provides the external sync via the Apple IAC (Inter-Application Communication) drivers. In turn, the GMS sends MIDI note-on and note-off data to the instruments in Ableton. Using this method I can live-loop on various tracks and build a multi-timbral composition in real time. Here’s an example from a recent practice session.

GMS Practice Track Number 3

External Sync Feature Added to the GMS

With some expert help from Grant Muller, I have successfully added the ability to synchronize the GMS with an external MIDI signal. This feature opens up vast possibilities for performance and collaboration with the tool. To test the feature I sent external sync from Ableton Live to the GMS, which in turn routed note information back through the IAC drivers into Ableton to drive a VST FM synth. I started by live-looping a few phrases from the sequencer, including a bass line, a mid-range arpeggio, and some heavily delayed FM clav, then put it together with a recycled beat into a two-minute micro-track. Everything heard, except the drums, is notes output from the GMS via video stimulus.
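For the curious, the timing math behind MIDI clock sync is compact. This is a general sketch of how the MIDI specification works, not the GMS’s actual implementation: the master sends 24 clock pulses per quarter note, so a synced slave can derive the tempo from the spacing of incoming ticks.

```python
PPQN = 24  # MIDI clock pulses per quarter note, per the MIDI spec

def bpm_from_tick_interval(seconds_per_tick):
    """Tempo implied by the spacing of incoming MIDI clock messages."""
    seconds_per_beat = seconds_per_tick * PPQN
    return 60.0 / seconds_per_beat

# At 120 BPM, a clock tick arrives every 0.5 / 24 seconds (about 20.8 ms):
print(bpm_from_tick_interval(0.5 / PPQN))  # -> 120.0
```

In practice a slave smooths the interval over several ticks, since individual MIDI messages jitter.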

GMS External Sync Test

Chromatic Currents Part II

This second part of “Chromatic Currents” was produced with the GMS using a string of lights placed in a large glass vase. I moved the camera around the vase to direct the flow of musical phrases with one hand while adjusting transposition and note-duration settings in the sequencer with the other.

You might notice that the video stimulus does not resemble lights in a vase. This is because I applied a negative filter to the video after capturing the performance. Once again I used a pleasant pentatonic scale interspersed with rare dissonant notes, along with probability distributions in the note durations, to give it an eerie awkwardness.
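The note-selection idea can be sketched in a few lines of Python. The scale degrees, dissonance probability, and duration weights below are invented stand-ins, not the actual GMS settings:

```python
import random

PENTATONIC = [60, 62, 64, 67, 69]   # C major pentatonic, one octave
DISSONANT = [61, 66, 70]            # out-of-scale neighbors for rare color
DURATIONS = [0.25, 0.5, 1.0]        # in beats
DUR_WEIGHTS = [0.5, 0.35, 0.15]     # short notes most likely

def next_note(rng, dissonance=0.1):
    """Draw a (pitch, duration) pair: mostly pentatonic, rarely dissonant."""
    pool = DISSONANT if rng.random() < dissonance else PENTATONIC
    return rng.choice(pool), rng.choices(DURATIONS, weights=DUR_WEIGHTS)[0]

rng = random.Random(0)  # seeded so the phrase is reproducible
phrase = [next_note(rng) for _ in range(8)]
```

Tuning the `dissonance` probability and the duration weights is what shifts the result between pleasant and eerie.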

GMS: Chromatic Currents Part II from Unearthed Music on Vimeo.