Testing the newest version of Photosounder gave me an opportunity to apply some Photoshop filters to sound that I had not yet tried. I experimented with halftone patterns, lens blur, pixelated color halftones, the patchwork filter, and the smudge tool.
One of the more interesting filters ended up being the plaster effect under distort. The plaster effect has a relief setting that gives the image a 3D look, but it also smooths the interiors of areas within the image. This eliminated the noise between passages of speech, but it also made the dialogue virtually unintelligible.
Michel Rouzic has just released version 1.4 of Photosounder, which includes a new “lossless” mode so the output is identical to the input. Previously there was some loss of resolution when importing audio. From Michel:
Basically, the lossless mode in question is a sort of 2D time-frequency filtering mode, kind of like what some other programs such as Audition 3 do by letting you airbrush on a spectrogram. The difference here is that besides the brushes Photosounder has, you can export the image to Photoshop and do some very precise filtering: for example, making a sound feature disappear by hand, enhancing parts of a sound, or subtracting sound, as I once did by taking the difference between a song’s spectrogram and its instrumental version’s spectrogram to isolate the vocals. You can also experiment with contrast, curves, levels, sharpening, and various effects (I’m pretty sure you could, for example, try the glowing edges filter again and get a different-sounding result).
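The vocal-isolation trick Michel describes is essentially spectral subtraction: difference the magnitude spectrograms of the full mix and the instrumental, then resynthesize. Here is a minimal numpy sketch of that idea; the function names, window, and hop settings are my own choices for illustration, not anything from Photosounder.

```python
import numpy as np

def stft(x, win=256, hop=128):
    # Hann-windowed short-time Fourier transform: one spectrum per frame.
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def spectral_subtract(mix, instrumental, win=256, hop=128):
    # Subtract the instrumental's magnitude spectrogram from the mix's,
    # clamping at zero and keeping the mix's phase -- a rough analogue
    # of differencing two spectrogram images in an image editor.
    M = stft(mix, win, hop)
    I = stft(instrumental, win, hop)
    mag = np.maximum(np.abs(M) - np.abs(I), 0.0)
    return mag * np.exp(1j * np.angle(M))
```

With two aligned test tones standing in for the vocal and the backing track, the backing tone's energy mostly cancels while the "vocal" tone survives; real mixes are messier, which is why hand-editing the spectrogram image, as Michel suggests, can beat the purely automatic subtraction.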
To illustrate the lossless mode, here’s a segment of dialogue from a 1972 social commentary film in the public domain presented with the lossless mode on and again with it off. The lossless mode sounds exactly like the original waveform, while without the lossless mode the audio lacks resolution.
Pluggo includes an interesting device called Vocalese. Basically, Vocalese is a virtual instrument built from a collection of phonetic samples. If you’re clever, and very patient, you can paste these samples together to form words, thereby synthesizing speech. I wasn’t really interested in doing that, nor am I patient enough, but I liked the idea of using the instrument to drive a vocoder. To do this I created a MIDI sequence that played each of the phonetic samples in the instrument, used a plugin to randomize the notes in realtime so the sequence is never the same, then directed the output into a vocoder plugin, followed by delay and reverb for atmosphere.
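The randomizing step above can be sketched in a few lines of Python. The note range here is purely hypothetical (the post doesn't document which MIDI notes Vocalese maps its phonetic samples to); the point is just that each pass draws a fresh sequence, so the babble feeding the vocoder never repeats.

```python
import random

# Hypothetical mapping: assume the phonetic samples sit on
# consecutive MIDI notes 36-83 (an assumption for illustration).
PHONEME_NOTES = list(range(36, 84))

def randomized_sequence(length, seed=None):
    """Pick a phoneme note at random for each step, mimicking a
    realtime note-randomizer plugin placed before the instrument."""
    rng = random.Random(seed)
    return [rng.choice(PHONEME_NOTES) for _ in range(length)]
```

In a DAW this shuffling happens continuously on the MIDI stream rather than ahead of time, but the effect is the same: an ever-changing stream of phonemes to feed the vocoder.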
I recorded these wind chimes, which were hanging above the door of my friends Alice and Damon’s house, recently. It was early evening and the traffic was light for a Saturday night in Northeast Minneapolis, but you can still hear some motor vehicles nearby in the background. I really like the tuning of these chimes, so I may go back and record them properly soon.
After Joel Ryan and Keir Neuringer’s appearance at the Ted Mann Concert Hall on February 21, 2009 for the Spark Festival, I had an opportunity to talk with Keir during the night life event at the Bedlam Theatre. I told him all about Audio Cookbook and he agreed to let me post a segment of his performance here.
The performance consisted of two improvisational pieces with Keir on saxophone and Joel Ryan processing the sound in real-time. The sheer breadth of textures and mood produced by the duet made it difficult to decide what to include in this entry.
The first piece was 23:55 long, while the second was 10:15. Here’s a fifty-seven second segment from the first piece that illustrates some of Keir’s unorthodox techniques on the saxophone as well as Joel Ryan’s approach to real-time audio manipulation.