Interactivity Sonified Workshop at INST-INT

The INST-INT 2015 conference, exploring the “…art of Interactivity for objects, environment, and experiences,” has just wrapped up, and I had the honor and privilege of giving a workshop at the event titled Interactivity Sonified. The intent of the workshop was to teach attendees to sonify their work by triggering, generating, and processing sonic textures or musical forms through interactivity. I covered several basic programming techniques for adding sound to projects using input devices and numerical values. Touchscreens, microphones, cameras, gyros, MIDI controllers, or any other stream or set of incoming data can be used to add sound. Sonifying this information adds a whole new sensory dimension to interactive installations and performances.
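To give a flavor of what mapping an input to sound can look like, here is a minimal sketch (not from the workshop materials) that maps the trackpad or mouse position to the pitch and loudness of a sine oscillator, assuming the Processing Sound library is installed:

```
import processing.sound.*;

SinOsc osc;

void setup() {
  size(400, 400);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  background(0);
  // Horizontal position controls pitch (110-880 Hz),
  // vertical position controls loudness.
  osc.freq(map(mouseX, 0, width, 110, 880));
  osc.amp(map(mouseY, height, 0, 0.0, 0.8));
}
```

Any of the other inputs mentioned above could stand in for mouseX and mouseY; the idea is simply to map a stream of numbers onto sonic parameters.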

During the workshop I covered sonification examples in Processing.org and Max while looking at input signals from the Leap Motion, MIDI controllers, video camera, microphone, keyboard, and trackpad. We experimented with recording, looping, reversing, pitch shifting, and granulating sampled audio. We also looked at modeling waveforms and processing them through lowpass, highpass, and bandpass filters, delays, and reverbs. Finally, we looked at the convolution reverb in Max for Live, trying out several of the included IRs as well as discussing the technique of sampling impulse responses.
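As a rough illustration of the sampled-audio side of this material, here is another minimal sketch (again assuming the Processing Sound library, with a placeholder file named sample.wav in the sketch’s data folder) that pitch shifts a loop by changing its playback rate while sweeping a lowpass filter over it:

```
import processing.sound.*;

SoundFile sample;
LowPass lowPass;

void setup() {
  size(400, 400);
  // sample.wav is a placeholder; use any short audio file.
  sample = new SoundFile(this, "sample.wav");
  lowPass = new LowPass(this);
  sample.loop();
  lowPass.process(sample);
}

void draw() {
  background(0);
  // Horizontal position shifts pitch via playback rate (0.25x to 4x),
  // vertical position sweeps the lowpass cutoff (100 Hz to 8 kHz).
  sample.rate(map(mouseX, 0, width, 0.25, 4.0));
  lowPass.freq(map(mouseY, height, 0, 100, 8000));
}
```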

In this video I asked the attendees to pull out their headphone cords after completing the task of triggering sounds with a moving object. The resulting cacophony in the room was quite beautiful! I thoroughly enjoyed giving this workshop and would love to do it again. Please be in touch if you’re part of an organization interested in a workshop like this. For more information you can view the slideshow for the workshop at instint.johnkeston.com. Keep in mind that the slideshow covers just a fraction of the activities. Most of the time was spent applying code and examples to trigger, generate, or process sound.

SoundCloud Flashback: Music for People on Shelves


This five-year-old set is one of the very first things I ever posted on SoundCloud: 86 minutes from a live solo performance with Minneapolis Art on Wheels.

I used Ableton Live to produce the set in real time and my wavetable glitch machine Max patch to make most of the noises, which I routed into Live using Soundflower. Check out the original posts here:

Video documentation:
audiocookbook.org/people-on-shelves/

The original article:
audiocookbook.org/music-for-people-on-shelves/

Audiovisual Granular Synthesis of Water Objects

This is a screen capture from a Max project I developed that does interactive, synchronized, granular synthesis of corresponding sound and video, tentatively titled AVGM for Audiovisual Grain Machine. I have used the software during a performance at the Echofluxx festival in Prague and at the Katherine E. Nash gallery for the opening of The Audible Edge exhibition during Northern Spark 2014.

Audiovisual Grain Machine Demo

Here’s a quick demo of the software I am designing to do audiovisual granular synthesis that I’ll be presenting at Moogfest and performing with at Echofluxx. It allows a performer to apply granular synthesis to sound and corresponding video using a touch interface such as MIRA (shown). The audio and video are synchronized in parallel. The software also has the capability to capture and repeat gestures so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. This demo granulates the voice and image of Lister Rossel. In addition I use analogue synthesizers to contrast the digital manipulations.
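The parallel synchronization is easier to picture with a little arithmetic. The following is only a conceptual Processing-style sketch, not the actual Max patch: it assumes a fixed audio sample rate and video frame rate and shows how a grain’s start position in the audio can be converted to the video frame displayed alongside it.

```
float sampleRate = 44100;  // audio samples per second (assumed)
float frameRate  = 30;     // video frames per second (assumed)

// Convert a grain's start point in the audio buffer to the matching
// video frame, so both media scrub from the same position.
int videoFrameForGrain(float grainStartSample) {
  float seconds = grainStartSample / sampleRate;
  return round(seconds * frameRate);
}

void setup() {
  // A grain starting 110,250 samples in (2.5 seconds) maps to frame 75.
  println(videoFrameForGrain(110250));
}
```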

This work alludes to the speech-to-song illusion discovered by Diana Deutsch. It also evokes “event fusion” as vocalizations are repeated much faster than humanly possible until they enter the audio range. Adding the corresponding visuals makes it appear uncanny as video and sound are looped at millisecond intervals.
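The numbers make the event fusion easy to appreciate: once a loop is shorter than roughly 50 milliseconds it repeats more than about 20 times per second, and the ear stops hearing separate events and starts hearing a pitch. The figures below are illustrative, not taken from the piece:

```
void setup() {
  float loopMs = 5.0;           // loop length in milliseconds (illustrative)
  float hz = 1000.0 / loopMs;   // repetition rate: 200 Hz, roughly the pitch of G3
  println(loopMs + " ms loop repeats at " + hz + " Hz");
}
```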

John Keston Performance at Echofluxx14


I am very excited to be performing at Echofluxx14 this May 7 in Prague. My performance is a couple of weeks after my presentation at Moogfest in Asheville, where I’ll be presenting the software that I have been developing for my Echofluxx performance. It’s a Max/MSP application that does audiovisual granular synthesis. The application allows a performer to apply granular synthesis to sound and corresponding video using a touch interface. The audio and video are accurately synchronized, creating uncanny effects. The software also has the capability to capture and repeat gestures so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. My performance will include several movements that granulate everyday sounds and images and then contrast them with tones produced using analogue synthesizers. Video documentation is forthcoming.

My Echofluxx performance was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.