Video of the Gestural Music Sequencer

Here’s a video of me playing with my Gestural Music Sequencer. I’ll upload a better version at some point, but I think you can at least get a sense of how you might use this kind of tool (I’m talking about the sequencer, not the performer). As you can see, the video has been mirrored so it’s easier to follow your own movements.

To reiterate how the sequencer works, the X coordinate of the brightest pixel determines the pitch, while the Y coordinate controls the dynamics. The application outputs MIDI data that I’m routing to Reason. I’ve programmed the up and down arrows on the keyboard to increase or decrease a multiplier that, along with the 15 fps frame rate, determines the time between each note-on event. I’ve also enabled a group of keys to adjust the transposition.
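The mapping above can be sketched in a few lines. This is a minimal illustration, not the actual application (the GMS is a Processing sketch); the function names, the linear pitch/velocity mapping, and the frame orientation are assumptions for the example.

```python
FPS = 15  # frame rate mentioned in the post

def brightest_pixel(frame):
    """Return (x, y) of the brightest pixel in a 2D grayscale frame."""
    best_xy, best_val = (0, 0), -1
    for y, row in enumerate(frame):
        for x, val in enumerate(row):
            if val > best_val:
                best_xy, best_val = (x, y), val
    return best_xy

def xy_to_midi(x, y, width, height, transpose=0):
    """Map X to MIDI pitch (0-127) and Y to velocity (0-127)."""
    pitch = int(x / (width - 1) * 127) + transpose
    velocity = 127 - int(y / (height - 1) * 127)  # top of frame = loud
    return max(0, min(127, pitch)), max(0, min(127, velocity))

def note_interval(multiplier):
    """Seconds between note-on events, given the multiplier and frame rate."""
    return multiplier / FPS
```

In the real sequencer the pitch would presumably be quantized to a scale rather than mapped linearly across all 128 MIDI notes, but the idea is the same: one bright point per frame becomes one note-on event.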


8 thoughts on “Video of the Gestural Music Sequencer”

  1. If I remember correctly, from the last post, you said it registers a hit point from the brightest pixel(s).

    Does it only accept one hit point?


    I’m thinking it would be pretty badass to have it register the brightest pixel of color X, and you could have 2-4 people standing back in the distance with LEDs and batteries taped to their fingers. Each LED color could run through a different component on the rack within Reason, so in theory we could set up a whole orchestra.

    Just a nifty idea.
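    The per-color idea above amounts to running the brightest-pixel search once per channel. A rough sketch, assuming an RGB frame stored as a 2D list of (r, g, b) tuples; the GMS itself only tracks a single brightest pixel, so this is purely illustrative:

    ```python
    def brightest_per_channel(frame):
        """Return {channel: (x, y)} of the brightest pixel in each
        of the R, G, and B channels (channel 0, 1, 2)."""
        best = {c: ((0, 0), -1) for c in range(3)}
        for y, row in enumerate(frame):
            for x, rgb in enumerate(row):
                for c, val in enumerate(rgb):
                    if val > best[c][1]:
                        best[c] = ((x, y), val)
        # each color channel could then drive its own MIDI channel
        return {c: xy for c, (xy, _) in best.items()}
    ```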

  2. This is the first time I’ve seen a work like this. It’s very interesting. Maybe if you used two webcams, or divided the screen into two sections, you could add another plane. Even the brightness of the pixel could add texture to the effect.

  3. I thought of doing something like what you’re suggesting, Jake, only with huge, public space projections and laser pointers. :)

    Hey Marko, thanks for your comment. I’ll probably set up some way of applying either brightness or color to CC data to manipulate a filter or LFO.

  4. That is the best sound I’ve heard you use for the gesture sequencer yet. You should try to solo this way live for a track or two. Awesome!

  5. @keston If you were to use different colored lasers things would get pretty expensive, but it would be cool.

    Idea: At your next show you should give out LED rings for the crowd to put on their fingers, and have them go nuts. You could be controlling the instrument types up on stage – might be cool?

  6. Very nice video John!
    I like how this reminds me of the Theremin! It also looks like something Kitaro would play–haha!

  7. I’ve been subscribed to your blog for a while now but this is the first time I’ve seen this video.
    I used to VJ and something like this would have been great for live shows. It would be good to see the GMS or a version of it as a plugin for VJ software like VJamm Pro which is the software I used to use.
    Instead of using the XY axes to produce notes, they could be used to trigger video clips on the X and audio samples on the Y, or something like that.
    Using a camera pointed into a crowd and then projecting the results could create a self produced audio visual experience for clubs or installations.
    Just a thought, I’ll stop waffling on now.
    It’s a great piece of software and I’m enjoying the results. Can’t wait to see the progression.
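The VJ mapping suggested in the comment above could be sketched by quantizing the pixel position into slot indices. The slot counts and function name here are hypothetical; it just shows the shape of the idea:

```python
def xy_to_slots(x, y, width, height, n_clips=8, n_samples=8):
    """Map a pixel position to (video clip index, audio sample index)."""
    clip = min(n_clips - 1, x * n_clips // width)
    sample = min(n_samples - 1, y * n_samples // height)
    return clip, sample
```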

Comments are closed.