Gestural Music Interface in Processing

A big thanks goes out to Jason Striegel and Nick Watts for inviting us to perform at Make: Day at the Science Museum of Minnesota. I performed with my group Keston and Westdal; other performers included Savage Aural Hotbed and Tim Kaiser. Besides the performances, there were some excellent presenters. Nils Westdal, our drummer Graham O’Brien, our intern Ben Siegel, and I greeted visitors at our table. We presented bits and pieces that Graham uses with his drums, including sticks, pencils, and a chain. We also showed materials from Unearthed Music and Audio Cookbook, and I revealed a gestural music sequencer (GMS) I developed in Processing.

I was really excited to see the reaction to the sequencer. The application samples video and displays it inverted, so it looks as though you’re looking into a mirror. Each frame is analyzed for brightness, and the X and Y coordinates of the brightest pixel are converted into a note: the X axis selects a pitch, while the Y axis determines the dynamics. As visitors moved, danced, or gestured in front of the camera, notes were generated based on a predetermined scale. Here’s a short sample of what the GMS can produce. I’ll post more about this soon.

Gestural Music Interface
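
For anyone curious how the brightest-pixel idea might look in Processing, here’s a minimal sketch. It is not the GMS source: only the standard video library’s Capture class is assumed, and the pentatonic scale, three-octave range, base note, and println output are illustrative stand-ins.

// A rough sketch of the brightest-pixel idea, not the GMS source.
import processing.video.*;

Capture cam;
int[] scaleSteps = {0, 2, 4, 7, 9}; // pentatonic intervals (an assumption)
int baseNote = 48;                  // MIDI C3, also an arbitrary choice

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();

  // Flip the image horizontally so it reads like a mirror.
  pushMatrix();
  scale(-1, 1);
  image(cam, -width, 0);
  popMatrix();

  // Find the brightest pixel in the frame.
  cam.loadPixels();
  float maxB = -1;
  int bx = 0, by = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      if (b > maxB) {
        maxB = b;
        bx = x;
        by = y;
      }
    }
  }

  // Match the mirrored display so the note follows what the viewer sees.
  int mx = cam.width - 1 - bx;

  // X selects a scale degree over three octaves; Y sets the velocity
  // (here, higher in the frame means louder; another assumption).
  int degree = int(map(mx, 0, cam.width, 0, scaleSteps.length * 3));
  int pitch = baseNote + (degree / scaleSteps.length) * 12
                       + scaleSteps[degree % scaleSteps.length];
  int velocity = int(map(by, 0, cam.height, 127, 1));

  // Real output (MIDI, OSC, a synth) is omitted; just report the note.
  println("note " + pitch + " velocity " + velocity);
}

A real version would send those values to a MIDI or OSC destination and schedule notes on a clock rather than firing one per frame, but the mapping above is the core of the idea as described.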

6 thoughts on “Gestural Music Interface in Processing”

  1. Sounds like you made a badass digital theremin! I really wish I could’ve made it yesterday. Without knowing when you guys were going to play, I couldn’t plan around work and school. I’d definitely like to know more about your GMS and Processing.

  2. Thanks, Jake. It’s a work in progress and the code is still pretty messy, but once I’ve got it in a better place I’ll consider sharing it here.

  3. I’m glad you made it to Make: Day. I was too busy running around like a maniac to get a chance to talk to you guys, but I did get some photos, and I loved the performance.

    Thanks for hanging with all of us out there. :)
