As promised, here is the third set from the DGK performance at the Red Stag on February 19, 2011. (Photo by Dane Messall of DGK at the Slam Factory)
DGK at the Red Stag Set 3
The June 28 installment of Experimental Music Mondays starts out with music from EMM regular Terr the Om (Nathan Brende). Terr the Om combines circuit bending and laptop mangling to create glistening, quirky, and bit-crushed on-the-spot compositions with a break beat sensibility.
Next is Siamese Bug, made up of drummer Tim Glenn and guitarist Jeremy Ylvisaker. Tim Glenn (HeatdeatH, Squidfist) and Jeremy Ylvisaker (Alpha Consumer, Dosh, Andrew Bird) have played together in Fog and Ourmine. Individually they’ve performed everywhere from the Sydney Opera House to your nightmares. Expect to hear the sounds of contact mics on torn cymbals and vintage transistor interference through guitar pickups and pedal arrays.
The final act of the evening is by noise masters Juhyo. “JUHYO is a collaboration between Minneapolis artists Brian Kopish (Surrounded) and Bill Henson (Oblong Box). Together they create horrifyingly beautiful soundscapes of pure noise. Armed with an array of homemade oscillators, delay units, resonators, samplers and sheer volume; aimed with composition, discipline and conscious, focused intent; JUHYO exists as an entity of creative expression, freedom, subtle beauty and eardrum bleeding power.”
Today I had a student ask how to make old science fiction machinery sounds. The sound he wanted was for a robot starting up then slowly shutting down. We tried a few different things and finally settled on using Reason to create a random sequence of notes.
I started with a chromatic scale and then randomized it using the change events function. We played it back in Subtractor and tweaked the patch until it sounded like what he was going for. The tricky part was pitch bending the sequence: Reason 3.5 does not support tempo automation, so although we could use the pitch bend wheel, the notes all played at the same speed. To get around this we exported the audio and loaded it into the NN-XT as a sample, then applied automation to the pitch wheel with a twenty-four-semitone range.
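The two steps above can be sketched outside of Reason as well. This is a minimal, hypothetical Python sketch (not Reason's actual internals): shuffle a chromatic scale the way the change events function randomizes notes, then map a downward twenty-four-semitone "shutdown" ramp to 14-bit MIDI pitch-wheel values.

```python
import random

# Step 1: a chromatic scale as MIDI note numbers (C4..B4), shuffled to
# mimic Reason's "change events" randomize function.
scale = list(range(60, 72))
random.seed(0)  # fixed seed so the example is repeatable
random.shuffle(scale)

# Step 2: approximate the slow shutdown with a pitch ramp. With a
# +/-24 semitone bend range, the 14-bit pitch wheel (0..16383,
# center 8192) maps roughly linearly to semitones.
def semitones_to_wheel(semitones, bend_range=24):
    return int(round(8192 + semitones / bend_range * 8191))

# Ramp from 0 down to -24 semitones across the sequence.
ramp = [semitones_to_wheel(-24 * i / (len(scale) - 1))
        for i in range(len(scale))]
```

The ramp starts at the wheel's center value (8192) and ends near the bottom of its range, which is the "slowly shutting down" effect applied to the whole resampled sequence.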
Robot Shutdown
Pluggo includes an interesting device called Vocalese. Basically, Vocalese is a virtual instrument made up of a collection of phonetic samples. If you’re clever, and very patient, you can paste these samples together to create words, thereby synthesizing speech. I wasn’t really interested in doing that, nor am I patient enough, but I liked the idea of using the instrument to drive a vocoder. In order to do this I created a MIDI sequence that played each one of the phonetic samples in the instrument. Then I used a plugin to randomize the notes in realtime, so the sequence is never the same. Then I directed the output into a vocoder plugin, followed by delay and reverb for atmosphere.
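The randomization idea is simple to sketch in code. This is a hypothetical illustration, not the plugin I used: each step picks a random phoneme sample by index, so the sequence feeding the vocoder is never the same twice (the sample count here is made up, not Vocalese's actual figure).

```python
import random

# Assumed number of phonetic samples in the instrument (hypothetical).
NUM_PHONEMES = 48

def random_phoneme_sequence(length, seed=None):
    """Pick a fresh random phoneme index for each step, so every
    playback of the sequence differs (unless a seed is supplied)."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_PHONEMES) for _ in range(length)]

# A sixteen-step run; in the actual setup each index would trigger
# one phonetic sample, whose output then drives the vocoder.
seq = random_phoneme_sequence(16)
```

In the real signal chain these randomized notes drove the vocoder plugin, with delay and reverb added afterward for atmosphere.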
Vocalese Vocoder
No, this is not the answer to a “before and after” puzzle on an episode of Wheel of Fortune. They are two of Photoshop's many filters. These sound files are the rejects: although they aren't bad, the effect these filters had on my electric piano passage wasn't as interesting as the rest of my experiments. They also sound very similar to each other, which might not be the case with different source sounds or other settings. Anyway, that's it for my first round of using Photoshop filters to process audio. Next time I plan to try this with some more natural, acoustic sounds.
Crystalized Electric Piano
Ocean Ripple Electric Piano