Gestural Music Interface in Processing

A big thanks goes out to Jason Striegel and Nick Watts for inviting us to perform at Make: Day at the Science Museum of Minnesota. I performed with my group Keston and Westdal. Other performers included Savage Aural Hotbed and Tim Kaiser. Besides the performances, there were some excellent presenters. Nils Westdal, our drummer Graham O’Brien, our intern Ben Siegel, and I greeted visitors at our table. We presented bits and pieces that Graham used with his drums, including sticks, pencils, and a chain. We also showed materials from Unearthed Music and Audio Cookbook, and I revealed a gestural music sequencer (GMS) I developed in Processing.

I was really excited to see the reaction to the sequencer. The application samples video and displays it inverted so it looks as though you’re looking into a mirror. Each frame is analyzed for brightness, then the X and Y coordinates of the brightest pixel are converted into a note. The X axis selects a pitch, while the Y axis determines the dynamics. As visitors moved, danced, or gestured in front of the camera, notes were generated based on a predetermined scale. Here’s a short sample of what the GMS can produce. I’ll post more about this soon.

Gestural Music Interface
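The post doesn’t include the GMS source, so here is a minimal plain-Java sketch of the mapping it describes: find the brightest pixel in a frame, turn its X coordinate into a scale degree and its Y coordinate into a velocity. All names, the two-octave range, and the choice of C minor are my own assumptions, not the original code.

```java
public class BrightestPixelMapper {
    // Semitone offsets of a natural minor scale (an assumed scale choice).
    static final int[] MINOR = {0, 2, 3, 5, 7, 8, 10};

    // Return {x, y} of the brightest pixel in a grayscale frame,
    // indexed as frame[y][x] with values 0..255.
    public static int[] brightest(int[][] frame) {
        int bx = 0, by = 0, best = -1;
        for (int y = 0; y < frame.length; y++)
            for (int x = 0; x < frame[y].length; x++)
                if (frame[y][x] > best) { best = frame[y][x]; bx = x; by = y; }
        return new int[]{bx, by};
    }

    // Map x across the frame width to a MIDI pitch in two octaves
    // of C minor starting at C3 (note 48).
    public static int xToPitch(int x, int width) {
        int steps = MINOR.length * 2;                     // two octaves
        int i = Math.min(x * steps / width, steps - 1);
        return 48 + (i / MINOR.length) * 12 + MINOR[i % MINOR.length];
    }

    // Map y (0 = top of frame) to a MIDI velocity, louder toward the top.
    public static int yToVelocity(int y, int height) {
        return 127 - Math.min(y * 127 / height, 127);
    }

    public static void main(String[] args) {
        int[][] frame = {{0, 10}, {5, 255}};
        int[] p = brightest(frame);
        System.out.println("pitch " + xToPitch(p[0], 2)
                + " velocity " + yToVelocity(p[1], 2));
    }
}
```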

Notes From a Hat in C Minor

I created this sequence of randomized notes using Processing.org with the RWMidi library installed. The notes were randomly selected from a C minor scale. I also randomized the timing of the notes to eliminate any rhythmic qualities, and the velocity was randomized within a range so there’s no consistency to the dynamics either. I could go further into Dada territory by using a chromatic scale, or even random frequencies entering microtonal realms, but this is just an experiment I did to test some of the functionality within the library.

Notes From a Hat
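A plain-Java sketch of the “notes from a hat” idea described above: pitches drawn from a C minor scale, velocities randomized within a range, and a randomized gap between notes to erase any pulse. This is my own reconstruction, not the original RWMidi sketch; the ranges are assumptions.

```java
import java.util.Random;

public class NotesFromAHat {
    // One octave of C minor starting at middle C (MIDI 60).
    static final int[] C_MINOR = {60, 62, 63, 65, 67, 68, 70};
    static final Random rng = new Random();

    public static int nextPitch()    { return C_MINOR[rng.nextInt(C_MINOR.length)]; }
    public static int nextVelocity() { return 40 + rng.nextInt(60); }  // 40..99
    public static int nextGapMs()    { return 50 + rng.nextInt(950); } // 50..999 ms

    public static void main(String[] args) {
        // Print a handful of note events; a real sketch would send
        // these as MIDI note-ons instead of printing them.
        for (int i = 0; i < 8; i++)
            System.out.printf("note %d vel %d wait %d ms%n",
                    nextPitch(), nextVelocity(), nextGapMs());
    }
}
```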

Pulsar Automata in E Minor

The cellular automaton known as the “Game of Life” originated from work done in 1970 by British mathematician John Horton Conway. Curious about how the Game of Life sequencer would react to documented patterns, I drew several of them into the sequencer and captured the MIDI output in Ableton Live. In order to use the documented patterns I changed the grid to thirteen by thirteen squares so I could match the patterns exactly. I got some varied musical phrases as a result. A very symmetrical sequence was produced by the pulsar (pictured). Starting the sequencer with the pulsar created a simple, rigid half-bar pattern before all the cells died. Afterward I ran the MIDI into a virtual instrument, looped it, and applied processing to get today’s sound.

Pulsar Automata in E Minor
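For reference, here is a minimal Life step on a bounded grid, where cells beyond the edge count as dead — which is how a small thirteen-by-thirteen grid can clip a pattern like the pulsar and kill it off rather than letting it oscillate. This is my own sketch, not Wesen’s sequencer code.

```java
public class Life {
    // One generation of Conway's rules on a bounded (non-wrapping) grid:
    // a cell is alive next step if it has exactly 3 live neighbors, or
    // is currently alive with exactly 2.
    public static boolean[][] step(boolean[][] g) {
        int h = g.length, w = g[0].length;
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int n = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w && g[yy][xx]) n++;
                    }
                next[y][x] = n == 3 || (g[y][x] && n == 2);
            }
        return next;
    }
}
```

A sequencer built on this would typically read each new generation as a step of notes, e.g. column index for pitch, row index or live-cell count for velocity.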

Three Phase Oscillator

Another Processing library that I have looked into is RWMidi, a relatively simple and easy-to-use set of MIDI tools. To illustrate how to use the library, Wesen, from Ruin & Wesen, produced a screencast on how to make a “Game of Life” sequencer. I decided to have a look at the sequencer to see if I could route the MIDI from Processing to other applications, like Ableton Live and Reason. I accomplished this using the IAC Driver found in the Audio MIDI Setup utility. I routed the MIDI data to Reason to hear the results, then started manipulating some of the behavior of the sequencer. Later I routed the MIDI to Ableton Live. After that, one thing led to another and now I have the building blocks for a new track. Here’s a rendered snippet of the MIDI data that I captured and edited for the piece.

Sine of Life
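As a rough illustration of the routing step, here is a plain-Java sketch using the standard `javax.sound.midi` API (not RWMidi) that looks for a MIDI output whose name contains “IAC” and sends a note through it, so Live or Reason can pick it up on the other side of the bus. The device-name match and note values are assumptions.

```java
import javax.sound.midi.*;

public class IacSend {
    // Find an output device (one we can send to) whose name contains
    // the given fragment; returns null if none is available.
    public static MidiDevice findOutput(String nameFragment) {
        try {
            for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
                MidiDevice dev = MidiSystem.getMidiDevice(info);
                if (info.getName().contains(nameFragment) && dev.getMaxReceivers() != 0)
                    return dev;
            }
        } catch (MidiUnavailableException e) {
            // No MIDI subsystem available; fall through to null.
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        MidiDevice iac = findOutput("IAC");
        if (iac == null) { System.out.println("No IAC bus found"); return; }
        iac.open();
        Receiver out = iac.getReceiver();
        out.send(new ShortMessage(ShortMessage.NOTE_ON, 0, 60, 100), -1);  // middle C
        Thread.sleep(500);
        out.send(new ShortMessage(ShortMessage.NOTE_OFF, 0, 60, 0), -1);
        iac.close();
    }
}
```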

Sequence of Randomized Pitches and Durations

I’ve been researching audio libraries for Processing recently, since I will soon start developing a specialized music application for personal use. I considered using MaxMSP, but Processing seems to suit this project a bit better. If you’re not familiar with Processing, it is an IDE designed for designers, artists, musicians, or anyone interested in exploring new ideas. Although it is mostly used for visual projects, there are several examples of music software, like Tiction, which I wrote about in an entry titled Sound For Dali’s Melting Clocks. One of the libraries I’m investigating is called jm-Etude. It’s very easy to implement and use, and makes a few of the features in jMusic, a Java music composition project, accessible in Processing. Here’s some audio from a quick sketch designed to create a random sequence of notes. I also randomized the durations from whole notes to sixteenths, excluding tuplets for the time being.

Randomized Pitch Durations
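The idea behind the sketch can be shown in plain Java (the jm-Etude/jMusic calls are omitted): random pitches paired with durations drawn from whole notes down to sixteenths, measured in beats, with no tuplets. The pitch range here is my own assumption.

```java
import java.util.Random;

public class RandomPitchDurations {
    // Whole, half, quarter, eighth, sixteenth — in beats, no tuplets.
    static final double[] DURATIONS = {4.0, 2.0, 1.0, 0.5, 0.25};
    static final Random rng = new Random();

    public static int nextPitch() { return 48 + rng.nextInt(25); }  // C3..C5
    public static double nextDuration() {
        return DURATIONS[rng.nextInt(DURATIONS.length)];
    }

    public static void main(String[] args) {
        // Print pitch/duration pairs; a jMusic version would wrap each
        // pair in a Note and append it to a Phrase for playback.
        for (int i = 0; i < 8; i++)
            System.out.printf("pitch %d for %.2f beats%n",
                    nextPitch(), nextDuration());
    }
}
```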