Herding Random Behaviors

After playing Precambrian Resonance for a few people and explaining how the arpeggiator introduces randomness into the output, I was asked how that randomness makes it sound different from previous playbacks. This was easy for me to imagine, since I had heard it rendered several different ways, but difficult to explain. Therefore I have re-rendered the piece to illustrate how it changes.

This brings up an issue that I have encountered on several occasions. When audio processing creates some sort of randomness in a mix, how can you get exactly what you want? What if after you export the audio there’s some chunk of randomized audio that just doesn’t quite work?

My solution is to render the track that has the random processing on it several times. For Precambrian Resonance 0.2 I rendered the processing eleven times. After that I’ll listen and compare the renders, or if I hear one that I like during the rendering, I’ll just choose it. Ableton Live makes this easy with the “Freeze Track” option, which essentially renders the track while allowing you to continue making adjustments.

Sometimes it is not that easy. I have encountered situations where version after version of the randomized processing doesn’t quite fit. At this stage what I do is carefully listen to the audio for phrases that have something interesting going on. The next step is to sequence the selected phrases into a complete track, effectively herding the random behaviors into what I’m after. I suppose that this is similar to using genetic algorithms to hybridize the audio in a semi-manual way.

Precambrian Resonance 0.2

Precambrian Resonance

Remember those Precambrian rock noises from North Shore Rocks? Well, for this piece I loaded the unprocessed recording of those rocks into a simple sampling plugin, then arpeggiated the sampler randomly within a scale. This created a cloud of stumbling, chaotic rhythms that change every time the piece is played back in the software.
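The idea of "arpeggiating randomly within a scale" can be sketched in a few lines of code. This is only an illustration of the concept, not the plugin's actual algorithm; the scale, step count, and function names are my own assumptions.

```python
import random

# Hypothetical sketch: a "random" arpeggiator mode that re-rolls its
# note choices from a fixed scale on every playback/render.

C_MINOR = [60, 62, 63, 65, 67, 68, 70]  # MIDI notes: C4 D4 Eb4 F4 G4 Ab4 Bb4

def random_arpeggio(scale, steps, seed=None):
    """Return `steps` MIDI notes chosen uniformly from `scale`.
    A different seed (or no seed) yields a different pattern each render."""
    rng = random.Random(seed)
    return [rng.choice(scale) for _ in range(steps)]

pattern = random_arpeggio(C_MINOR, 8, seed=42)
print(pattern)  # eight notes, all drawn from the C minor scale
```

Rendering the track several times, as described above, is effectively sampling this process with a fresh seed each time.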

I listened to this for a long while, fascinated by it, then decided to run it all through the Resonator in Ableton Live. This processor produces a chord of resonant pitches that react to the signal sent to the device; in this case, my falling rock sample. Since the rocks had no discernible pitches, this instantly created a musical bed of sound. I tuned the resonance to a C minor 9 chord and then automated the tuning of a fifth pitch to create a melody. A little bit more fussing about, and this is what I got.
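A minimal way to sketch how a tuned resonator turns unpitched material into a chord is a bank of feedback comb filters, one per chord tone: a comb whose delay matches one pitch period rings at that pitch when excited. This is an illustration of the principle only, not Ableton's Resonator algorithm; the sample rate, feedback amount, and chord voicing are assumptions.

```python
# Sketch: feed an unpitched impulse (think "rock click") through comb
# filters tuned to the notes of a C minor 9 chord, then mix the outputs.

SR = 8000  # sample rate in Hz, kept low for a quick demo

def comb_resonate(signal, freq, feedback=0.98):
    """Feedback comb filter tuned to resonate at freq (Hz)."""
    delay = max(1, round(SR / freq))
    buf = [0.0] * delay
    out = []
    for i, x in enumerate(signal):
        y = x + feedback * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out

# An assumed C minor 9 voicing: C3, Eb3, G3, Bb3, D4 (Hz)
CM9 = [130.81, 155.56, 196.00, 233.08, 293.66]

impulse = [1.0] + [0.0] * (SR // 2)        # a single unpitched click
outs = [comb_resonate(impulse, f) for f in CM9]
mix = [sum(vals) for vals in zip(*outs)]   # the five resonators ring together
```

Because the source has no pitch of its own, whatever excites the filters comes out sounding like the chord they are tuned to, which matches the "instant musical bed" effect described above.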

Precambrian Resonance

Unity Status

Technically this post and the last have been more than one sound, so perhaps I should rename the category “one sound or more every day”. Anyway, I just made a rough mix of this musical sketch (not quite a complete piece yet) and thought it could serve as today’s sound.

The image is a partial screen grab of one of the virtual instruments I used. The chordal and melodic tones and the bass are all played and programmed by me, but the rest is sampled from an unnamed jazz recording, so although the samples are heavily manipulated, this composition is unlikely to go much further than this.

Unity Status

Piano and Kalimba

Every so often I think it might be a good idea to record using acoustic instruments I have lying around my studio. This time I started with a little loop of syncopated piano. On top of that I added a very simple melody with a kalimba, or thumb piano. There’s no processing other than normalization to -3 dB to give the levels a little boost.
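Peak normalization to -3 dB just means scaling the audio so its loudest sample sits 3 dB below full scale. A minimal sketch, assuming floating-point samples in the range [-1, 1]:

```python
# Peak-normalize a list of float samples to a target level in dBFS.

def normalize(samples, target_db=-3.0):
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)            # silence: nothing to scale
    gain = 10 ** (target_db / 20) / peak  # -3 dB ~= 0.708 of full scale
    return [s * gain for s in samples]

quiet = [0.05, -0.2, 0.1]
loud = normalize(quiet)
print(round(max(abs(s) for s in loud), 3))  # ~0.708, i.e. -3 dBFS
```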

I have dozens of these tiny pieces, and once in a great while they actually get finished as tracks, but the vast majority of them, like this example, sit in dusty folders on backup hard drives. Most of the time that is exactly where they belong, but I do review them occasionally to get ideas or see if there’s anything worth producing.

Piano meets Kalimba

Bit Reduction

This drum loop has been processed by reducing the bit depth and down-sampling the clip until very little of it is reminiscent of its original state. As you can see in the image, the waveform has been reduced to a wide pulse that sounds very distorted (you might want to start at low volume). The top of the image represents a short section of the original audio, while the bottom is the processed version.

The bit depth was reduced to two, which allows for four possible amplitude positions for the waveform: two above zero and two below. With no level at zero, every zero crossing becomes an abrupt vertical step, so the output sounds very similar to audio that has been badly clipped, but to my ears this sort of distortion has more charm than simply clipping the waveform. The only other processing is automated pitch shifting, starting four octaves down and rising to the original pitch by about seven seconds into the audio. This is where it sounds closest to its original form. It stays there until about nine seconds in, then shifts back down forty-eight semitones until it ends after almost twenty seconds.
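The 2-bit quantization described above can be sketched directly: snap each sample to one of four level centers, two above zero and two below, with no level at zero. This is a generic bitcrusher sketch assuming float samples in [-1, 1], not the exact plugin I used.

```python
import math

def bitcrush(samples, bits=2):
    """Quantize samples to 2**bits evenly spaced levels across [-1, 1]."""
    levels = 2 ** bits            # 4 levels for 2 bits
    step = 2.0 / levels           # spacing between level centers
    out = []
    for s in samples:
        # snap to the nearest level center; no level sits at zero, so
        # every zero crossing becomes a hard step, as described above
        q = (int((s + 1.0) / step) + 0.5) * step - 1.0
        out.append(min(max(q, -1.0 + step / 2), 1.0 - step / 2))
    return out

sine = [math.sin(2 * math.pi * i / 32) for i in range(32)]
crushed = bitcrush(sine)
print(sorted(set(crushed)))  # four levels: [-0.75, -0.25, 0.25, 0.75]
```

A smooth sine in, a squared-off four-level pulse out, which is essentially what the bottom waveform in the image shows.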

redux