Audiovisual Granular Synthesis of Water Objects

This is a screen capture from a Max project I developed, tentatively titled AVGM for Audiovisual Grain Machine, that does interactive, synchronized granular synthesis of corresponding sound and video. I have used the software during a performance at the Echofluxx festival in Prague and at the Katherine E. Nash gallery for the opening of The Audible Edge exhibition during Northern Spark 2014.

Bass Station II Through the Minifooger Delay


I finally got my hands on a Minifooger Delay by Moog. I wanted something battery-operated and more portable than the Memory Man for my performance at Echofluxx.org on May 7th, 2014, and for an upcoming recording project in Northern England. Lucky for me, it showed up at Foxtone Music just days before my flight to Prague. Thanks, Eric!

Audiovisual Grain Machine Demo

Here’s a quick demo of the software I am designing to do audiovisual granular synthesis, which I’ll be presenting at Moogfest and performing with at Echofluxx. It allows a performer to apply granular synthesis to sound and corresponding video using a touch interface such as MIRA (shown). The audio and video are synchronized in parallel. The software can also capture and repeat gestures so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. This demo granulates the voice and image of Lister Rossel. In addition, I use analogue synthesizers to contrast the digital manipulations.
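For readers curious about the mechanics, here is a minimal Python sketch of the parallel audio/video grain idea, not the actual Max patch. It assumes the audio is a NumPy sample array and the video is a sequence of frames; the function name, grain size, and playback parameters are my own illustration.

```python
import numpy as np

def granulate(audio, frames, sr=44100, fps=30,
              grain_ms=80, position_s=1.0, repeats=4):
    """Pull a short grain from the same time offset in an audio buffer
    and a video frame sequence, then loop both in parallel."""
    # Audio grain: a short slice of samples starting at position_s.
    start = int(position_s * sr)
    length = int(grain_ms / 1000 * sr)
    grain = audio[start:start + length]

    # Window the grain to avoid clicks at the loop boundaries.
    grain = grain * np.hanning(len(grain))

    # Video grain: the frames covering the same span of time.
    f_start = int(position_s * fps)
    f_len = max(1, int(grain_ms / 1000 * fps))
    frame_indices = list(range(f_start, f_start + f_len))

    # Repeat both grains together so image and sound stay locked.
    audio_out = np.tile(grain, repeats)
    video_out = frame_indices * repeats
    return audio_out, video_out
```

At these example settings, an 80 ms grain at 30 fps maps onto only two or three video frames, which is why short grain sizes make the picture stutter in lockstep with the sound.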

This work alludes to the speech-to-song illusion discovered by Diana Deutsch. It also evokes “event fusion,” as vocalizations are repeated much faster than humanly possible until the repetition itself enters the audio range. Adding the corresponding visuals makes the result uncanny, as video and sound are looped at millisecond intervals.
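The arithmetic behind that fusion point is simple; the figures below are illustrative assumptions rather than measurements from the piece.

```python
def repetition_pitch(grain_ms):
    """Frequency in Hz implied by looping a grain back to back."""
    return 1000.0 / grain_ms

# A 5 ms loop repeats 200 times per second, roughly the pitch G3,
# while a 50 ms loop sits near the ~20 Hz edge where discrete
# events begin to fuse into a continuous tone.
print(repetition_pitch(5))   # 200.0
print(repetition_pitch(50))  # 20.0
```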

John Keston Performance at Echofluxx14


I am very excited to be performing at Echofluxx14 this May 7 in Prague. My performance is a couple of weeks after my presentation at Moogfest in Asheville, where I’ll be presenting the software I have been developing for my Echofluxx performance. It’s a Max/MSP application that does audiovisual granular synthesis, allowing a performer to apply granular synthesis to sound and corresponding video using a touch interface. The audio and video are accurately synchronized, creating uncanny effects. The software can also capture and repeat gestures so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. My performance will include several movements that granulate everyday sounds and images and then contrast them with tones produced by analogue synthesizers. Video documentation is forthcoming.
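The gesture capture and repeat feature can be pictured as a small looper for touch events. The sketch below is an assumption-laden Python analogy, not the Max implementation: gestures are taken to arrive as (parameter, value) pairs, and `apply` stands in for whatever sets the granulation parameters for both the audio and video engines.

```python
import time

class GestureLooper:
    """Record timestamped touch events and replay them in a loop."""

    def __init__(self):
        self.events = []       # (relative_time, param, value) tuples
        self.start_time = None

    def record(self, param, value):
        # Store each incoming event relative to the first one captured.
        now = time.monotonic()
        if self.start_time is None:
            self.start_time = now
        self.events.append((now - self.start_time, param, value))

    def play(self, apply, loops=2):
        # Replay the captured gesture at its original timing,
        # calling apply(param, value) for each event, `loops` times.
        for _ in range(loops):
            t0 = time.monotonic()
            for offset, param, value in self.events:
                wait = offset - (time.monotonic() - t0)
                if wait > 0:
                    time.sleep(wait)
                apply(param, value)
```

Layering several of these loops is what lets a single performer build up multiple strands of granulated sound and image while still playing live on top of them.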

My Echofluxx performance was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.