Automated Auto Pan

As a producer, one technique I have found effective for developing the dynamics of a performance is adding expression through automated processing. In this synth phrase from a composition I’m working on, I have applied automation to add an expressive quality to the recording.

I have always been fascinated by the Doppler effect as it is mechanically applied to sound through Leslie speaker cabinets. I own a Leslie cabinet that I had modified so that I could run instruments other than a Hammond organ through the amplifier and control the rotation speed with a foot switch. My goal was to play my Rhodes through a Leslie, and that is something I did during live performances for years to come.

My favorite characteristic of the Leslie is the slowing down and speeding up of the motors that control the speaker rotation. This can be simulated quite well with plugins or virtual instruments such as the Native Instruments B4. In this example, rather than use a Leslie simulation, I opted to simply automate the “Rate” parameter in Live’s Auto Pan effect. Leslie simulators often add other characteristics like motor noise, filtering, and distortion, but I wanted to keep the signal relatively clean while still giving the instrument that speeding-up and slowing-down expressive quality. To get the full effect of the automated panning, listen with headphones firmly planted on your ears.
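For anyone who wants to experiment with the idea outside of Live, here is a minimal sketch in Python/NumPy: an auto-pan whose LFO rate is itself automated, with the rate integrated sample by sample so the sweeps accelerate smoothly. This is only an illustration of the technique, not Live’s actual Auto Pan algorithm, and the 0.8 Hz and 7 Hz endpoints are just stand-ins for slow and fast Leslie speeds.

```python
import numpy as np

SR = 44100

def auto_pan(mono, rate_hz, sr=SR):
    """Pan a mono signal across the stereo field with a sine LFO whose
    rate can change over time (rate_hz is an array with one value per
    sample, much like an automated 'Rate' parameter)."""
    # Integrate the time-varying rate to get a smooth LFO phase
    phase = 2 * np.pi * np.cumsum(rate_hz) / sr
    pan = 0.5 * (1 + np.sin(phase))          # 0 = hard left, 1 = hard right
    left = mono * np.cos(pan * np.pi / 2)    # equal-power pan law
    right = mono * np.sin(pan * np.pi / 2)
    return np.stack([left, right], axis=1)

# Two seconds of a test tone; the pan rate ramps from 0.8 Hz to 7 Hz,
# roughly like a Leslie spinning up from its slow to its fast speed
n = 2 * SR
tone = np.sin(2 * np.pi * 220 * np.arange(n) / SR)
rate = np.linspace(0.8, 7.0, n)
stereo = auto_pan(tone, rate)
```

Because the rate is integrated into the phase rather than applied directly, the panning never jumps when the rate changes, which is what gives the gradual spin-up feel.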

Automated Auto Pan

Zhiguly

I stumbled across this gem, recorded during a jam session between myself on Rhodes, Nils Westdal on bass, and Kyle Herskovitz (DJ Zenrock) on turntables. The session happened more than four years ago, on March 11, 2004.

I couldn’t stop myself from messing about with it until I got this simple twenty-two-second arrangement. I automated a filter on the Rhodes, ran it through an amp-modeling plugin, and topped it off with a touch of ping-pong delay.

The main thing that attracted me to this archive was the skillful turntablism of Mr. Herskovitz. I have been fortunate to work with him off and on for more than a decade. Kyle is the most talented, creative, and dedicated DJ and turntablist I have ever heard or worked with, so I have included a solo snippet of his track from this session so you can hear some of his magic on its own.

By the way, the photo is from a video installation we produced. It was performed during a show at the convention center in Minneapolis on a co-bill with Keston and Westdal and Zenrock last year.

Zhiguly

Zhiguly Scratch

Hybridized Beat Repeat

In my last post I explained how I rein in random processing behaviors to get the results I’m after. A good processor for randomizing audio is Ableton Live’s Beat Repeat. Beat Repeat effortlessly duplicates the once tedious process of repeating small chunks of a sample to get stuttering effects, but also has parameters to randomize the repetitions in a variety of ways.
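The core idea — repeat a grid-sized slice of the audio a random number of times, some of the time — can be sketched in a few lines of Python/NumPy. This is a rough illustration of stutter-style repetition, not Ableton’s actual Beat Repeat algorithm; the `chance`, `grid`, and `max_repeats` parameters only loosely mirror the plugin’s controls.

```python
import random
import numpy as np

def beat_repeat(audio, grid, chance=0.3, max_repeats=4, seed=None):
    """Step through the audio one grid-sized slice at a time; with some
    probability, stutter a subdivision of that slice a few extra times."""
    rng = random.Random(seed)
    out = []
    for start in range(0, len(audio), grid):
        chunk = audio[start:start + grid]
        out.append(chunk)
        if rng.random() < chance:
            reps = rng.randint(2, max_repeats)
            slice_len = max(1, len(chunk) // reps)
            out.extend([chunk[:slice_len]] * reps)  # the stutter
    return np.concatenate(out)
```

Seeding the random generator makes a given render repeatable, which matters once you start auditioning takes.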

For the Rhodes solo in “Six Weeks” I wanted to scramble my performance in some way to match the “broken” drum programming. Beat Repeat was the ticket, but I couldn’t get a complete take that fit well with the rest of the piece. If you look at the image you can see that the solo is made up of fifteen separate regions of audio, all extracted from specific renders of the performance through Beat Repeat. After rendering the audio several times I selected specific phrases and organized them in a way that enhanced the dynamics of the piece, creating a hybrid. Listen to the solo by itself, then play the full track from 2:54 to hear it in context.

Six Weeks (solo) – Hybrid Beat Repeat Solo

Six Weeks (full track) – One Day to Save All Life

Herding Random Behaviors

After playing Precambrian Resonance for a few people and explaining how the arpeggiator introduced randomness into the output, I was asked how that randomness made it sound different from previous playbacks. This was easy for me to imagine, since I had heard it rendered several different ways, but difficult to explain. Therefore I have re-rendered the piece to illustrate how it changes.

This brings up an issue that I have encountered on several occasions. When audio processing creates some sort of randomness in a mix, how can you get exactly what you want? What if after you export the audio there’s some chunk of randomized audio that just doesn’t quite work?

My solution is to render the track that has the random processing on it several times. For Precambrian Resonance 0.2 I rendered the processing eleven times. After that I’ll listen to and compare the renders, or if I hear one that I like during the rendering, I’ll just choose it. Ableton Live makes this easy with the “Freeze Track” option, which essentially renders the track while allowing you to continue making adjustments.
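The render-and-compare workflow is easy to reproduce in code if you give each pass its own seed: any take you like can then be recreated exactly. A minimal Python/NumPy sketch, with a hypothetical `glitch` function standing in for the real randomized plugin chain:

```python
import numpy as np

def render_variations(process, audio, n=11, base_seed=0):
    """Run the same randomized process n times, each pass with its own
    seed, so every take can be reproduced exactly and compared later."""
    return {base_seed + i: process(audio, np.random.default_rng(base_seed + i))
            for i in range(n)}

# Stand-in for a randomized effect chain: silence random grid slices
def glitch(audio, rng, grid=1024):
    out = audio.copy()
    for start in range(0, len(out), grid):
        if rng.random() < 0.25:
            out[start:start + grid] = 0.0
    return out

takes = render_variations(glitch, np.ones(8192), n=11)
```

The seed plays the role of the frozen render: instead of keeping eleven audio files, you could keep eleven seeds.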

Sometimes it is not that easy. I have encountered situations where version after version of the randomized processing doesn’t quite fit. At this stage what I do is carefully listen to the audio for phrases that have something interesting going on. The next step is to sequence the selected phrases into a complete track, effectively herding the random behaviors into what I’m after. I suppose that this is similar to using genetic algorithms to hybridize the audio in a semi-manual way.
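That sequencing step amounts to splicing hand-picked regions from different takes into one track. A toy Python/NumPy sketch, where the takes are hypothetical seeded renders and the (seed, start, end) phrases stand in for regions chosen by ear:

```python
import numpy as np

# Hypothetical seeded renders of the same passage (stand-ins for the
# real rendered or frozen takes)
def render(seed, n=4096):
    return np.random.default_rng(seed).standard_normal(n)

takes = {seed: render(seed) for seed in (1, 2, 3)}

# Hand-picked phrases as (seed, start, end) in samples, chosen by ear
phrases = [(1, 0, 1024), (3, 2048, 3072), (2, 1024, 2048)]
hybrid = np.concatenate([takes[s][a:b] for s, a, b in phrases])
```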

Precambrian Resonance 0.2

Bit Reduction

This drum loop has been processed by reducing the bit depth and down-sampling the clip until very little of it is reminiscent of its original state. As you can see in the image, the waveform has been reduced to a wide pulse that sounds very distorted (you might want to start at a low volume). The top of the image represents a short section of the original audio, while the bottom is the processed version.

The bit depth was reduced to two, which allows only four possible amplitude values: two above zero and two below. Since the waveform jumps between these levels rather than crossing zero smoothly, the output sounds very similar to audio that has been badly clipped, but to my ears this sort of distortion has more charm than simply clipping the waveform. The only other processing involved is automated pitch shifting, which rises from four octaves down back up to the original pitch by about seven seconds into the audio. This is where it sounds closest to its original form. It stays there until about nine seconds in and then shifts back down forty-eight semitones until the clip ends after almost twenty seconds.
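The reduction described above can be approximated in Python/NumPy: quantize to four amplitude steps, none of them at zero (which is why the result collapses into a wide pulse), then hold samples to lower the effective sample rate. This is a sketch under those assumptions, not the exact plugin processing.

```python
import numpy as np

def bitcrush(audio, bits=2, downsample=8):
    """Quantize floats in [-1, 1] to 'bits' of depth, then hold each
    sample for 'downsample' frames to lower the effective sample rate."""
    levels = 2 ** bits                     # 2 bits -> 4 amplitude steps
    # Map onto a grid with two steps above zero and two below, none at zero
    crushed = (np.floor(audio * levels / 2) + 0.5) * 2 / levels
    peak = (levels - 1) / levels           # 0.75: keep an input of 1.0 on the grid
    crushed = np.clip(crushed, -peak, peak)
    # Zero-order hold: keep every Nth sample and repeat it N times
    return np.repeat(crushed[::downsample], downsample)[:len(audio)]
```

With `bits=2` the only possible output values are ±0.25 and ±0.75, so even a smooth sine comes out as a stepped pulse, much like the waveform in the image.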

redux