In my previous example of audio created with my gestural music sequencer, which I’m tentatively naming GMS, I presented a pattern of sampled Rhodes notes in a chromatic scale. One of the functions I’ve built into the application is the ability to switch scales. Currently the available scales are major, minor, pentatonic minor, and chromatic. Here’s an example of the application producing notes in a minor scale. One thing you may notice is the dynamic range: gesturing lower on the Y axis makes the notes quieter, while gesturing near the top of the screen makes them louder.
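Here’s a rough sketch of how the scale switching and the Y-to-velocity mapping could look in Processing. The helper names and ranges are just for illustration; this isn’t the actual GMS source.

    // Scale intervals in semitones above the root, one octave each.
    int[] MAJOR      = {0, 2, 4, 5, 7, 9, 11};
    int[] MINOR      = {0, 2, 3, 5, 7, 8, 10};
    int[] PENT_MINOR = {0, 3, 5, 7, 10};
    int[] CHROMATIC  = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};

    int[] currentScale = MINOR;  // the currently selected scale
    int root = 60;               // middle C as the root note

    // Map a normalized X position (0.0 to 1.0) to a pitch in the
    // current scale, covering two octaves above the root.
    int pitchFromX(float x) {
      int steps = currentScale.length * 2;
      int i = constrain((int) (x * steps), 0, steps - 1);
      return root + (i / currentScale.length) * 12 + currentScale[i % currentScale.length];
    }

    // Map a normalized Y position (0.0 = bottom, 1.0 = top) to velocity,
    // so gestures near the top of the screen play louder.
    int velocityFromY(float y) {
      return (int) map(y, 0, 1, 20, 127);
    }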
A big thanks goes out to Jason Striegel and Nick Watts for inviting us to perform at Make: Day at the Science Museum of Minnesota. I performed with my group Keston and Westdal. Other performers included Savage Aural Hotbed and Tim Kaiser. Besides the performances, there were some excellent presenters. Nils Westdal, our drummer Graham O’Brien, our intern Ben Siegel, and I greeted visitors at our table. We presented the bits and pieces that Graham uses with his drums, including sticks, pencils, and a chain. We also showed materials from Unearthed Music and Audio Cookbook, and I revealed a gestural music sequencer (GMS) I developed in Processing.
I was really excited to see the reaction to the sequencer. The application samples video and displays it flipped horizontally so it looks as though you’re looking into a mirror. Each frame is analyzed for brightness, and the X and Y coordinates of the brightest pixel are converted into a note: the X axis selects the pitch, while the Y axis determines the dynamics. As visitors moved, danced, or gestured in front of the camera, notes were generated based on a predetermined scale. Here’s a short sample of what the GMS can produce. I’ll post more about this soon.
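In the meantime, here’s a stripped-down Processing sketch of the brightest-pixel idea, not the GMS code itself; it assumes a recent Processing release with the bundled video library.

    import processing.video.*;

    Capture cam;

    void setup() {
      size(640, 480);
      cam = new Capture(this, width, height);
      cam.start();
    }

    void draw() {
      if (!cam.available()) return;
      cam.read();

      // Draw the frame flipped horizontally so it reads like a mirror.
      pushMatrix();
      scale(-1, 1);
      image(cam, -width, 0);
      popMatrix();

      // Find the brightest pixel in the frame.
      cam.loadPixels();
      float maxB = -1;
      int bx = 0, by = 0;
      for (int y = 0; y < cam.height; y++) {
        for (int x = 0; x < cam.width; x++) {
          float b = brightness(cam.pixels[y * cam.width + x]);
          if (b > maxB) {
            maxB = b;
            bx = x;
            by = y;
          }
        }
      }

      // Normalize the coordinates: flip X to match the mirrored display
      // and flip Y so the top of the screen is 1.0. X picks the pitch
      // and Y sets the dynamics.
      float nx = 1 - bx / (float) cam.width;
      float ny = 1 - by / (float) cam.height;
      println(nx + " " + ny);
    }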
I created this sequence of randomized notes using Processing.org with the RWMidi library installed. The notes were randomly selected from a C minor scale. I also randomized the timing of the notes to eliminate any rhythmic qualities, and the velocity was randomized within a range, so there’s no consistency to the dynamics either. I could go further into Dada territory by using a chromatic scale, or even random frequencies entering microtonal realms, but this is just an experiment I did to test some of the functionality within the library.
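The gist of it is only a few calls. This sketch assumes RWMidi’s MidiOutput exposes sendNoteOn() and sendNoteOff() the way the library’s example sketches do; the note range and timing values are my own.

    import rwmidi.*;

    MidiOutput midiOut;

    // C minor scale as MIDI note numbers, starting at middle C.
    int[] cMinor = {60, 62, 63, 65, 67, 68, 70, 72};

    int nextNoteFrame = 0;
    int lastPitch = -1;

    void setup() {
      size(200, 200);
      // Send to the first available MIDI output.
      midiOut = RWMidi.getOutputDevices()[0].createOutput();
    }

    void draw() {
      if (frameCount < nextNoteFrame) return;

      // Release the previous note before starting a new one.
      if (lastPitch >= 0) midiOut.sendNoteOff(0, lastPitch, 0);

      // Random pitch from the scale, random velocity within a range.
      int pitch = cMinor[(int) random(cMinor.length)];
      int velocity = (int) random(40, 127);
      midiOut.sendNoteOn(0, pitch, velocity);
      lastPitch = pitch;

      // Randomize the gap to the next note to avoid any steady rhythm.
      nextNoteFrame = frameCount + (int) random(5, 60);
    }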
The cellular automaton known as the “Game of Life” originated from work done in 1970 by British mathematician John Horton Conway. Curious about how the Game of Life sequencer would react to documented patterns, I drew several of them into the sequencer and captured the MIDI output in Ableton Live. In order to use the documented patterns I changed the grid to thirteen by thirteen squares so I could match the patterns exactly. This produced some varied musical phrases. A very symmetrical sequence was produced by the pulsar (pictured). Starting the sequencer with the pulsar created a simple, rigid half-bar pattern before all the cells died. Afterward I ran the MIDI into a virtual instrument, looped it, and applied processing to get today’s sound.
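For anyone wondering what the automaton itself is doing, one generation of Conway’s rules on a bounded thirteen by thirteen grid boils down to the function below. Cells off the edge count as dead, which is why a pattern like the pulsar can behave differently here than on an infinite plane.

    int SIZE = 13;

    // Compute one generation of Conway's rules on a bounded grid.
    boolean[][] step(boolean[][] grid) {
      boolean[][] next = new boolean[SIZE][SIZE];
      for (int y = 0; y < SIZE; y++) {
        for (int x = 0; x < SIZE; x++) {
          // Count the live neighbors; cells outside the grid are dead.
          int n = 0;
          for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
              if (dx == 0 && dy == 0) continue;
              int nx = x + dx;
              int ny = y + dy;
              if (nx >= 0 && nx < SIZE && ny >= 0 && ny < SIZE && grid[ny][nx]) n++;
            }
          }
          // Survival with 2 or 3 neighbors, birth with exactly 3.
          next[y][x] = grid[y][x] ? (n == 2 || n == 3) : (n == 3);
        }
      }
      return next;
    }

Reading the grid one column per sixteenth note, with rows mapped to scale degrees, is one straightforward way to turn the live cells into MIDI; the exact mapping in Wesen’s sequencer may differ.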
Another Processing library I have looked into is RWMidi, a relatively simple and easy-to-use set of MIDI tools. To illustrate how to use the library, Wesen, from Ruin & Wesen, produced a screencast on how to make a “Game of Life” sequencer. I decided to have a look at the sequencer to see if I could route the MIDI from Processing to other applications, like Ableton Live and Reason. I accomplished this using the IAC Driver found in the Audio MIDI Setup utility. I routed the MIDI data to Reason to have a listen to the results, then started manipulating some of the sequencer’s behavior. Later I decided to route the MIDI to Ableton Live. After that, one thing led to another and now I have the building blocks for a new track. Here’s a rendered snippet of the MIDI data that I captured and edited for the piece.
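In case it’s useful to anyone trying the same routing, picking the IAC bus from a Processing sketch looks roughly like this; I’m assuming the getOutputDeviceNames() and createOutput() calls behave as they do in the RWMidi examples, and that an IAC bus has already been enabled in Audio MIDI Setup.

    import rwmidi.*;

    MidiOutput midiOut;

    void setup() {
      // List the available MIDI outputs; the IAC Driver bus enabled in
      // Audio MIDI Setup should show up here.
      String[] names = RWMidi.getOutputDeviceNames();
      println(names);

      // Pick the IAC bus so the note data reaches Ableton Live or Reason.
      int device = 0;
      for (int i = 0; i < names.length; i++) {
        if (names[i].indexOf("IAC") >= 0) device = i;
      }
      midiOut = RWMidi.getOutputDevices()[device].createOutput();

      // Send a quick test note to confirm the routing.
      midiOut.sendNoteOn(0, 60, 100);
      midiOut.sendNoteOff(0, 60, 0);
    }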