Tracker Stop Effect

I have been busy today working on four or five separate mixes and managed to finalize two of them, maybe. We’ll see how my ears respond after some rest. Anyway, during the last bit of work I noticed that one of the processor chains was producing interesting random sounds whenever I pressed stop in Ableton Live. I decided to capture some of these sounds and see if they might be useful in the track.

Live has a great “resample” feature, but it was no use in this case because the only way to create the sound was by pressing stop, and when you do that it stops recording. So I opened up Audacity and attempted to route the output from Live into it. After about five minutes I realized this wasn’t working and turned to the web for an answer. I quickly came across Soundflower (Cycling ’74), a “Free Inter-application Audio Routing Utility for Mac OS X”. This allowed me to route the audio to Audacity as I started and stopped playback in Live. Here’s an edited version of the results. Warning: I normalized the render and it starts out extremely loud.

Tracker Stop Effect

Cuba, Illinois

Once again, today I set out to experiment for a few minutes and make a new sound using some processing I had yet to use. But, as is prone to happen, as I tweaked and played around a musical piece started to emerge. I sequenced a series of vocal samples, then applied a real-time randomizer to the sequence. Second in the chain was a vocoder plugin programmed to produce a Csus chord, followed by a stereo delay. Underneath it I layered a low melody and automated the waveform setting for one of the oscillators to get a digitized static effect. I titled it Cuba, Illinois after a town of about fifteen hundred people in Illinois called Cuba. I’ve never been there, but I like the juxtaposition of the town and state names.

Cuba, Illinois (Rough)

Robot Music

I produced this sound by playing one note in a virtual instrument called “Harmonic Dreamz”, which is part of Pluggo by Cycling ’74. After that I automated random patch changes so that all twenty-eight parameters included in the Harmonic Dreamz instrument were flying all over the place, creating a frenetic passage of electronic mayhem. Then I arpeggiated the note with some slight randomness in the pattern and ended up with this.

To me it sounds as if it could be speech or perhaps singing in a robot language. I recorded several examples of it. Some of the other examples have slight variations, others significant ones, so I may post more versions at some point. This recording is in mono with no processing. The output is exactly what the virtual instrument produced given the parameters sent to the device.
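The process above — random values sweeping every parameter while a note arpeggiates with slight pattern randomness — can be sketched conceptually. This is a hypothetical illustration, not the actual Pluggo automation: the function name, step count, and CC-style 0–127 value range are my own assumptions.

```python
import random

def robot_sequence(steps, num_params=28, arp=(60, 64, 67), jitter=0.25, seed=1):
    """Hypothetical sketch: at each step, pick random values for all
    twenty-eight parameters and the next arpeggiated pitch, occasionally
    shuffling the arp order for slight randomness in the pattern."""
    rng = random.Random(seed)
    pattern = list(arp)
    events = []
    for step in range(steps):
        if rng.random() < jitter:
            rng.shuffle(pattern)  # slight randomness in the arp pattern
        # one random value per parameter, MIDI-CC style (0-127)
        params = [rng.randint(0, 127) for _ in range(num_params)]
        pitch = pattern[step % len(pattern)]
        events.append((pitch, params))
    return events
```

With a fixed seed the “mayhem” is reproducible, which is a luxury the original recording didn’t have — every pass through the real instrument produced a different take.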

Robot Instigator

Hybridized Beat Repeat

In my last post I explained how I rein in random processing behaviors to get the results I’m after. A good processor for randomizing audio is Ableton Live’s Beat Repeat. Beat Repeat effortlessly duplicates the once tedious process of repeating small chunks of a sample to get stuttering effects, but also has parameters to randomize the repetitions in a variety of ways.
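The stutter mechanic Beat Repeat automates can be sketched in a few lines. This is a minimal analogue of the idea, not Ableton’s implementation — the function name and parameters (`grid`, `repeat_chance`, `repeats`) are assumptions loosely modeled on the device’s controls.

```python
import random

def beat_repeat(samples, grid, repeat_chance=0.3, repeats=4, seed=None):
    """Walk the audio in grid-sized chunks and, with some probability,
    replace the next `repeats` chunks with copies of the current chunk --
    a rough analogue of Beat Repeat's randomized capture-and-stutter."""
    rng = random.Random(seed)
    out = []
    i = 0
    while i < len(samples):
        chunk = samples[i:i + grid]
        if rng.random() < repeat_chance:
            for _ in range(repeats):
                out.extend(chunk)          # stutter: repeat the chunk
            i += grid * repeats            # skip the material it replaced
        else:
            out.extend(chunk)              # pass audio through unchanged
            i += grid
    return out
```

Rendering this with different seeds is exactly the situation described below: each pass stutters different chunks, so no two renders match.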

For the Rhodes solo in “Six Weeks” I wanted to scramble my performance in some way to match the “broken” drum programming. Beat Repeat was the ticket, but I couldn’t get a complete take that fit well with the rest of the piece. If you look at the image you can see that the solo is made up of fifteen separate regions of audio. These are all abstracted from specific renders of the performance through Beat Repeat. After rendering the audio several times I selected specific phrases and organized them in a way that enhanced the dynamics of the piece, creating a hybrid. Listen to the solo by itself, then hear it in context by playing the full track at 2:54.

Six Weeks (solo) – Hybrid Beat Repeat Solo

Six Weeks (full track) – One Day to Save All Life

Herding Random Behaviors

After playing Precambrian Resonance for a few people and explaining how the arpeggiator was introducing randomness into the output, I was asked how that randomness made each playback sound different from the last. This was easy for me to imagine since I had heard it rendered several different ways, but difficult to explain. Therefore I have re-rendered the piece to illustrate how it changes.

This brings up an issue that I have encountered on several occasions. When audio processing creates some sort of randomness in a mix, how can you get exactly what you want? What if after you export the audio there’s some chunk of randomized audio that just doesn’t quite work?

My solution is to render the track that has the random processing on it several times. For Precambrian Resonance 0.2 I rendered the processing eleven times. After that I listen and compare the renders, or if I hear one that I like during the rendering, I just choose it. Ableton Live makes this easy with the “Freeze Track” option, which essentially renders the track while allowing you to continue making adjustments.

Sometimes it is not that easy. I have encountered situations where version after version of the randomized processing doesn’t quite fit. At this stage what I do is carefully listen to the audio for phrases that have something interesting going on. The next step is to sequence the selected phrases into a complete track, effectively herding the random behaviors into what I’m after. I suppose that this is similar to using genetic algorithms to hybridize the audio in a semi-manual way.
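The herding workflow above — make many random renders, then assemble a hybrid from the best phrase in each slot — can be sketched abstractly. This is a toy model, not a real audio pipeline: `render` stands in for a bounce of the randomized track, and the scoring function stands in for listening and judging each phrase.

```python
import random

def render(num_phrases, seed):
    """Stand-in for one randomized render: a list of 'phrase' values.
    In practice each render would be a full bounce of the processed track."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(num_phrases)]

def herd(num_renders, num_phrases, score):
    """Hypothetical sketch of the selection step: render several times,
    then for each phrase slot keep the phrase from whichever render
    scores best there, sequencing the picks into a hybrid take."""
    renders = [render(num_phrases, seed) for seed in range(num_renders)]
    hybrid = []
    for i in range(num_phrases):
        best = max(renders, key=lambda r: score(r[i]))
        hybrid.append(best[i])
    return hybrid
```

The genetic-algorithm comparison fits: each render is a member of a population, the listener is the fitness function, and the hybrid is the selected recombination — just performed by ear rather than by code.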

Precambrian Resonance 0.2