Blitzen Machine

Here’s a snippet from a track I started working on today. I began by using the same techniques I described in Robot Music and in Robot Conspiracy, only this time I was more deliberate about the patch changes so that they lock in more cleanly with the tempo.

Another technique I used was to cut up individual slices of my recordings and load the samples into Ableton’s virtual drum machine, Impulse, so I could program patterns of the samples into a variety of MIDI clips. I also used a couple of very short sections of AM Radio Static diffused with a healthy amount of reverb.
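Conceptually, the slice-and-trigger workflow above looks something like the following sketch in Python. This is only an illustration of the idea, not anything from Live or Impulse; the function names, the step grid, and the use of plain float lists for audio are all my own assumptions.

```python
def slice_sample(audio, num_slices):
    """Cut a recording into equal-length slices, one per drum-machine pad."""
    step = len(audio) // num_slices
    return [audio[i * step:(i + 1) * step] for i in range(num_slices)]

def sequence_pattern(slices, pattern, step_len):
    """Lay slices onto a step grid, mimicking a MIDI clip triggering pads.

    `pattern` is a list of pad indices (or None for a rest), one per step.
    """
    out = [0.0] * (len(pattern) * step_len)
    for step_idx, pad in enumerate(pattern):
        if pad is None:
            continue  # rest: nothing triggered on this step
        start = step_idx * step_len
        for i, s in enumerate(slices[pad][:step_len]):
            out[start + i] += s
    return out
```

Once the slices are loaded, rearranging the pattern list is the cheap part, which is exactly what makes programming many MIDI clip variations from one set of slices so quick.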

Blitzen Machine

Guitar Chord

A rarely tapped resource for me is the collection of clips found in the sound file folders of Ableton Live sets I use for performances. My group, Keston and Westdal, uses two laptops running Live, synchronized over a MIDI network. We usually play instruments during our performances and use the laptops for live looping and for triggering loops and “scenes” as we construct the arrangements during the show. Our drummer gets a click so we can leave out or bring in sound from the laptops as we like. This way we can have purely live instrumentation intermingled with sequenced and live-looped audio. There’s a bit of a learning curve to performing this way, but it’s very liberating once you get it down.

This short sample of a guitar chord was played by my good friend Jason Cameron, who is based in Seattle. While jamming together in June 2008 I captured a few of his phrases in one of my Live sets, and I came across this one today while browsing through the sound file folders, looking for something to post. I dumped it back into Live, resisted the urge to reverse it, and added distortion, delay, and reverb for a little texture.

Guitar Chord

Piano Mallet Beat

So, what do I do with all these samples of different mallets on piano strings and other areas of the instrument? How about putting them all into a drum machine? Better yet, a virtual drum machine, like Ableton’s Impulse. In this example I have selected some percussive sounds as well as some tonal samples and tuned everything to work together, then created a simple beat with the samples. Key parameters in setting up Impulse included pitch, decay, filter frequency, resonance, and mode.
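For readers curious what two of those parameters are doing to each sample, here is a rough sketch of the underlying ideas: a naive pitch shift by resampling and an exponential decay envelope. The formulas and names are my own illustration of the concepts, not Impulse’s actual implementation.

```python
import math

def retune(sample, semitones):
    """Naive pitch shift by resampling: positive semitones raise the pitch
    (and shorten the sample), similar in spirit to a transpose control."""
    ratio = 2 ** (semitones / 12)
    length = int(len(sample) / ratio)
    return [sample[int(i * ratio)] for i in range(length)]

def apply_decay(sample, decay):
    """Exponential amplitude decay; a smaller `decay` dies away faster,
    which is what turns a ringing piano-string hit into a tight drum voice."""
    return [s * math.exp(-i / decay) for i, s in enumerate(sample)]
```

Tuning the tonal samples is just a matter of choosing semitone offsets so the slices sit in one key, while shortening the decay keeps the percussive hits from smearing into each other.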

Piano Mallet Beat

Hybridized Beat Repeat

In my last post I explained how I rein in random processing behaviors to get the results I’m after. A good processor for randomizing audio is Ableton Live’s Beat Repeat. Beat Repeat effortlessly duplicates the once-tedious process of repeating small chunks of a sample to get stuttering effects, and it also has parameters to randomize the repetitions in a variety of ways.
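The core repeat-with-chance idea can be sketched in a few lines of Python. This is a minimal, hand-rolled stutter, assuming audio as a plain list of samples; the parameter names are mine, and Beat Repeat’s actual grid, interval, gate, and variation controls are far richer than this.

```python
import random

def beat_repeat(audio, chunk_len, repeats, chance, rng=None):
    """Stutter effect: walk through the buffer chunk by chunk; with
    probability `chance`, repeat the current chunk `repeats` times in
    place of the chunks that would have followed it."""
    rng = rng or random.Random(0)  # seedable so a render is repeatable
    out = []
    i = 0
    while i < len(audio):
        chunk = audio[i:i + chunk_len]
        if rng.random() < chance:
            out.extend(chunk * repeats)   # stutter this chunk
            i += chunk_len * repeats      # ...skipping what it replaces
        else:
            out.extend(chunk)
            i += chunk_len
    return out
```

Setting `chance` low gives occasional glitches; setting it high turns the whole passage into stutters, which is roughly the axis you ride when dialing in how “broken” the result should sound.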

For the Rhodes solo in “Six Weeks” I wanted to scramble my performance in some way to match the “broken” drum programming. Beat Repeat was the ticket, but I couldn’t get a complete take that fit well with the rest of the piece. If you look at the image you can see that the solo is made up of fifteen separate regions of audio. These are all abstracted from specific renders of the performance through Beat Repeat. After rendering the audio several times I selected specific phrases and organized them in a way that enhanced the dynamics of the piece, creating a hybrid. Listen to the solo by itself, then hear it in context by playing the full track starting at 2:54.

Six Weeks (solo) – Hybrid Beat Repeat Solo

Six Weeks (full track) – One Day to Save All Life

Herding Random Behaviors

After playing Precambrian Resonance for a few people and explaining how the arpeggiator was introducing randomness into the output, I was asked how that randomness made it sound different from one playback to the next. This was easy for me to imagine, since I had heard it rendered several different ways, but difficult to explain. So I have re-rendered the piece to illustrate how it changes.

This brings up an issue that I have encountered on several occasions. When audio processing creates some sort of randomness in a mix, how can you get exactly what you want? What if after you export the audio there’s some chunk of randomized audio that just doesn’t quite work?

My solution is to render the track that has the random processing on it several times. For Precambrian Resonance 0.2 I rendered the processing eleven times. After that I’ll listen to and compare the renders, or if I hear one I like during the rendering process, I’ll just choose it. Ableton Live makes this easy with the “Freeze Track” option, which essentially renders the track while allowing you to continue making adjustments.

Sometimes it is not that easy. I have encountered situations where version after version of the randomized processing doesn’t quite fit. At this stage what I do is listen carefully to the audio for phrases that have something interesting going on. The next step is to sequence the selected phrases into a complete track, effectively herding the random behaviors toward what I’m after. I suppose this is similar to using genetic algorithms to hybridize the audio in a semi-manual way.
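The render-several-times-then-splice workflow described above can be sketched like this. Here a seeded chunk shuffle stands in for whatever randomized processing is on the track, and the `pick` list represents the manual listening-and-selecting step; every name in this sketch is illustrative, not part of any real tool.

```python
import random

def render(audio, seed):
    """Stand-in for one pass of a randomized effect: a seeded shuffle of
    fixed-size chunks. Each seed yields a different, repeatable render."""
    rng = random.Random(seed)
    chunks = [audio[i:i + 4] for i in range(0, len(audio), 4)]
    rng.shuffle(chunks)
    return [s for c in chunks for s in c]

def herd(audio, seeds, phrase_len, pick):
    """Render the track once per seed, then splice chosen phrases from
    chosen renders into one hybrid take. `pick` is a list of
    (render index, phrase index) pairs -- the manual selection step."""
    renders = [render(audio, s) for s in seeds]
    hybrid = []
    for render_idx, phrase_idx in pick:
        start = phrase_idx * phrase_len
        hybrid.extend(renders[render_idx][start:start + phrase_len])
    return hybrid
```

The point of seeding each render is that a phrase you liked on the third pass can be recovered exactly, so the final splice is assembled from known-good material rather than from whatever the randomness produces next.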

Precambrian Resonance 0.2