Falling Objects Synchronized to Produce Rhythm

Gravité from Renaud Hallée on Vimeo.

It has been around for a while, but I just came across this very clever piece by Montréal-based artist Renaud Hallée. His composition uses video and sound from falling objects, edited together to produce some nice rhythms with a few unexpected twists. Hint: it’s not all tennis balls.

Experimental Music Mondays Part 2

This Monday, March 29, 2010 is the second installment of Experimental Music Mondays curated by John Keston. The line-up this time includes Pawlic (Jesse Pollock) and Terr the Om (Nathan Brende).

Sandwiched between them is Ostracon. This is the name I’m using for the plural version of Ostraka. This instance involves the usual characters, John Keston (aka Ostraka) and Graham O’Brien, with the addition of Oliver Grudem, who will be interjecting his enigmatic imagery as an input source for the generative musical phrases produced by the GMS.

The venue is the Kitty Cat Klub, 315 14th Avenue SE, Minneapolis, Minnesota. The music starts at 9:00pm.
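
For anyone curious how imagery can act as an input source, here is a toy sketch in Python, assuming the Pillow imaging library; it is emphatically not how the GMS actually does it. It samples a row of pixels from a hypothetical image file, frame.png, and maps brightness to pitches in an arbitrary scale.

```python
# A toy illustration (not the GMS): map image brightness to pitches.
# Assumes Pillow is installed; "frame.png" is a hypothetical input image.
from PIL import Image

SCALE = [0, 2, 3, 5, 7, 10]  # a minor-ish hexatonic scale, an arbitrary choice

def image_to_phrase(path, steps=16, root=48):
    """Sample a row of pixels and map brightness to scale degrees."""
    img = Image.open(path).convert("L")      # grayscale
    img = img.resize((steps, 1))             # one brightness value per step
    notes = []
    for x in range(steps):
        level = img.getpixel((x, 0))         # brightness, 0..255
        degree = level * len(SCALE) // 256   # brightness picks a scale degree
        octave = level // 128                # brighter pixels jump an octave
        notes.append(root + 12 * octave + SCALE[degree])
    return notes

if __name__ == "__main__":
    print(image_to_phrase("frame.png"))      # a list of MIDI note numbers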

Four Oscillator Drone Produced with the WSG


What good is a Weird Sound Generator if you’re not using it to make weird sounds? Sometimes it is nice to just hold it on your lap and stroke it gently. That aside, it’s quite useful once you plug it in and start twiddling the knobs. Here’s a piece I created by tuning each of the four oscillators on the WSG and then fiddling with the filters. At the same time I made some adjustments to a phaser I was running it through in Ableton Live, and topped it off with a ping pong delay.

Four Oscillator Drone
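
The WSG is an analog circuit, so the patch above can’t be reduced to code, but for the curious, here is a minimal digital sketch of the same idea: four detuned oscillators summed into a drone, then bounced through a simple ping pong delay. It’s written in Python with NumPy; the frequencies, waveform, and delay time are arbitrary stand-ins rather than the settings I used, and the WSG’s filters and the phaser stage are omitted for brevity.

```python
# A rough digital analogue (not the WSG itself): four detuned oscillators
# summed into a drone, then run through a simple ping pong delay.
import numpy as np
import wave

SR = 44100
DUR = 10.0
t = np.arange(int(SR * DUR)) / SR

# Four slightly detuned square-wave oscillators; frequencies are arbitrary
freqs = [55.0, 55.7, 110.3, 164.8]
drone = sum(np.sign(np.sin(2 * np.pi * f * t)) for f in freqs) / len(freqs)

# Ping pong delay: successive echoes alternate between right and left
delay = int(0.375 * SR)   # delay time in samples, an arbitrary choice
fb = 0.5                  # feedback gain per echo
left = drone.copy()       # dry signal on the left
right = np.zeros_like(drone)
buf = drone * fb
side = 1                  # 1 = right, 0 = left
offset = delay
while offset < len(drone) and np.max(np.abs(buf)) > 1e-3:
    target = right if side else left
    target[offset:] += buf[: len(drone) - offset]
    buf *= fb             # each echo is quieter than the last
    side ^= 1             # swap channels for the next echo
    offset += delay

stereo = np.clip(np.stack([left, right], axis=1), -1, 1)
pcm = (stereo * 32767).astype(np.int16)

with wave.open("drone.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(pcm.tobytes())
```

Run it and it writes a ten-second stereo file, drone.wav, that you can audition in any player.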

Experimental Music Mondays Call for Entries

I am curating a series of Experimental Music events hosted by the Kitty Cat Klub in Minneapolis, Minnesota. The first installment is Monday, March 1st, 2010. Subsequent installments are scheduled for the last Monday of every month. For the first show we have three performances.

Ostraka (myself), with Graham O’Brien on drums, will be performing using the GMS. Terr the Om (Nathan Brende) will also be performing his distinct breed of electronic music, melding the output from his circuit-bent toys with looping and real-time arranging in Ableton Live. Thirdly, Dialsystem, consisting of brothers Graham and Casey O’Brien, will likely mesmerize listeners with their ethereal mix of bass, drums, and electronics. Music starts at 9:00pm.

I’m in the process of booking the upcoming events, so if you are a performer of experimental music and would like to get involved, please send your name, artist name, contact information, links to a biography, and links to audio examples to emm [ at ] audiocookbook [ dot ] org.

David Cope’s Emily Howell

I’ve just read a fascinating article about composer David Cope, who is known for creating music in the style of Bach, Mozart, and others with software he developed called Emmy. I first heard David Cope’s work on Radiolab and was intrigued by his approach. The article, Triumph of the Cyborg Composer, discusses his latest application, titled Emily Howell. Cope is using the computer in a more collaborative way to compose music in his own style with the help of his program. A couple of audio examples within the article illustrate the musical results.

Cope has received a lot of criticism regarding his work, including statements that his music lacks soul because it was written by a computer. But was it really written by a computer? I think a better term is generated. Cope wrote the software, so I would argue that the music generated by the software was ultimately written by the software developer, in this case Cope himself. In other instances I might argue that the music was created by the user of the software tool, rather than its developer. It comes down to who is at the controls. What decisions are being made, and by whom, or perhaps by what?

Since I’ve developed and am currently using software to perform and record generative music, I am curious about your opinions. You may have heard pieces on this site generated by the GMS. Perhaps you listened to the excerpts in the article. What do you think? Does music generated by computers lack soul? Does it diminish the human, communicative qualities contained in the work? Or are we using computers simply as tools? Perhaps, as computers and software evolve, we might begin to collaborate artistically with them rather than just use them slavishly. Based on the work of Cope and others, I believe that we are closer than we think to this becoming a reality.
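
For those who haven’t heard generative music before, a toy example may help ground the question. The sketch below (Python, and emphatically not the GMS itself) makes its “decisions” with a weighted random walk over a pentatonic scale; the scale, range, and step weights are arbitrary choices. Where you locate the authorship of the resulting phrase, in the code, its author, or its user, is exactly the question above.

```python
# A toy generative sketch, in the spirit of (but not) the GMS: a constrained
# random walk over a scale, where simple rules stand in for the "decisions".
import random

SCALE = [0, 2, 4, 7, 9]  # C major pentatonic degrees, an arbitrary choice
ROOT = 60                # middle C as a MIDI note number

def generative_phrase(length=8, seed=None):
    """Generate a phrase of MIDI notes via a weighted random walk."""
    rng = random.Random(seed)
    idx = rng.randrange(len(SCALE) * 2)              # start in a two-octave range
    phrase = []
    for _ in range(length):
        idx += rng.choice([-2, -1, -1, 0, 1, 1, 2])  # small steps are favored
        idx = max(0, min(idx, len(SCALE) * 2 - 1))   # clamp to the range
        octave, degree = divmod(idx, len(SCALE))
        phrase.append(ROOT + 12 * octave + SCALE[degree])
    return phrase

print(generative_phrase(seed=42))  # seeded, so the "composition" is repeatable
```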