Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic the audiovisual objects. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.

Audiovisual Granular Synthesis of Water Objects

This is a screen capture from a Max project I developed, tentatively titled AVGM for Audiovisual Grain Machine, that does interactive, synchronized granular synthesis of corresponding sound and video. I have used the software during a performance at the Echofluxx festival in Prague and at the Katherine E. Nash gallery for the opening of The Audible Edge exhibition during Northern Spark 2014.

Audiovisual Grain Machine Demo

Here’s a quick demo of the software I am designing to do audiovisual granular synthesis, which I’ll be presenting at Moogfest and performing with at Echofluxx. It allows a performer to apply granular synthesis to sound and corresponding video using a touch interface such as MIRA (shown). The audio and video are synchronized in parallel. The software can also capture and repeat gestures, so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. This demo granulates the voice and image of Lister Rossel. In addition, I use analogue synthesizers to contrast the digital manipulations.
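AVGM itself is a Max patch, but the audio side of the technique can be sketched in a few lines. Here is a minimal, illustrative granulator in Python (the names and parameters are my own, not the actual patch): it extracts short Hann-windowed grains from a source buffer at a chosen position and overlap-adds them into an output stream.

```python
import math

def hann(n, size):
    # Hann window: tapers each grain to zero at its edges so the
    # repeats don't click
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * n / (size - 1)))

def grain(source, grain_size, position):
    """Extract one enveloped grain of grain_size samples, starting at
    a normalized position (0.0-1.0) within the source buffer."""
    start = int(position * (len(source) - grain_size))
    return [source[start + n] * hann(n, grain_size) for n in range(grain_size)]

def render(source, grain_size=441, hop=220, n_grains=10, position=0.25):
    # Overlap-add successive grains; holding position fixed "freezes"
    # the sound, scrubbing it under a finger scans through the source
    out = [0.0] * (hop * (n_grains - 1) + grain_size)
    for g in range(n_grains):
        for n, s in enumerate(grain(source, grain_size, position)):
            out[g * hop + n] += s
    return out
```

Holding the read position still while grains continue to fire is roughly what produces the suspended, sustained textures in the demo; moving it corresponds to scrubbing through the material with a finger.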

This work alludes to the speech-to-song illusion discovered by Diana Deutsch. It also evokes an “event fusion” as vocalizations are repeated much faster than humanly possible until they enter the audio range. Adding the corresponding visuals makes it appear uncanny as video and sound are looped in millisecond intervals.
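The arithmetic behind the event-fusion effect is simple: a loop repeated every T milliseconds has a repetition rate of 1000/T Hz, and once that rate passes roughly 20 Hz the ear stops hearing separate repeats and starts hearing a pitch. A tiny sketch (the 20 Hz threshold and function names are my own illustration):

```python
def repetition_rate_hz(loop_ms):
    """Rate at which a grain looped every loop_ms milliseconds repeats."""
    return 1000.0 / loop_ms

def fuses_into_tone(loop_ms, threshold_hz=20.0):
    """Above roughly 20 Hz the repeats fuse into a perceived pitch
    rather than a rhythm (event fusion)."""
    return repetition_rate_hz(loop_ms) >= threshold_hz

# A 10 ms loop repeats at 100 Hz: heard as a tone, not a pulse.
# A 200 ms loop repeats at 5 Hz: heard as a rhythm.
```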

John Keston Performance at Echofluxx14


I am very excited to be performing at Echofluxx14 this May 7 in Prague. My performance comes a couple of weeks after my presentation at Moogfest in Asheville, where I’ll be presenting the software that I have been developing for my Echofluxx performance. It’s a Max/MSP application that does audiovisual granular synthesis. The application allows a performer to apply granular synthesis to sound and corresponding video using a touch interface. The audio and video are accurately synchronized, creating uncanny effects. The software can also capture and repeat gestures, so that the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. My performance will include several movements that granulate everyday sounds and images and then contrast them with tones produced using analogue synthesizers. Video documentation is upcoming.

My Echofluxx performance was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.

Multitouch on a Mac without a Touchscreen


As you may have noticed, it’s been a little quiet around here lately. The reason for this is that since the end of 2013 I have been keeping myself busy with a studio remodel (more on that later), followed by concentrating on preparations for a performance (I’m pleased to announce) at Echofluxx 14 in Prague, May 2014. Here’s a quick note about Echofluxx 13 from their site:

Echofluxx 13 is a festival of new media, visual art, and experimental music produced by Efemera of Prague with Anja Kaufmann and Dan Senn, co-directors. In cooperation with Sylva Smejkalová and early reflections, the Prague music school (HAMU), and Academy of Fine Arts (AVU), Echofluxx 13 will present international and Czech presenters in a five-day festival at the Trafačka Arena in Prague, Kurta Konráda 1, Prague 9, May 7-11, 2013. For more information contact: info@echofluxx.org

I’ll discuss more details about this upcoming performance in another post. For now I would like to bring attention to the possibility of using the Mac trackpad and/or Apple’s Magic Trackpad for multitouch. My performance at Echofluxx involves using custom-built software to loop granular audiovisual media. This idea evolved from past projects that used a 32″ touchscreen. This time the media will be projected, so I naturally decided to use the iPad as the controller. I built the project using Cycling ’74 Max and MIRA, which was very convenient, but I couldn’t get over the latency of using the iPad over WiFi for multitouch controls.

I decided that the most convenient alternative would be to use the trackpad on the Mac laptop. Max has an object called “mousestate” that polls button-status and cursor-position information from the default pointer device. However, it is not designed to take advantage of multitouch data. This is where Fingerpinger comes in. Fingerpinger was able to detect ten independent touch points (perhaps more, but I was out of fingers) on the built-in trackpad on my MacBook Pro. Which raises the question: how did I take that screenshot?

Ten touch points on such a small surface is probably impractical, but I only need two: one for X-Y data and a second for volume. Most importantly, I wanted the audiovisual content to be activated simply by touching the trackpad rather than having to click or hold down a key. Fortunately, Fingerpinger has a state value for each touch point that I was able to use to activate on touch and release. The latency is hardly noticeable compared to an iPad over WiFi, and I have also simplified my setup, meaning I can travel with less equipment and rely on fewer technologies. I still like the idea of using an iPad for multitouch controls, mostly because of the opportunities for visual feedback, but for this particular application Fingerpinger is a great solution.
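The two-touch mapping described above can be sketched outside of Max. This is a Python illustration with names of my own invention (the real patch consumes Fingerpinger’s output directly): the granulator gates open on any touch, the first active touch point supplies X-Y data, and the second supplies volume.

```python
def map_touches(touches):
    """Map trackpad touch points to granulator parameters.

    touches: list of (x, y, state) tuples, with x and y normalized
    to 0.0-1.0 and state == 1 while the finger is down.
    """
    active = [t for t in touches if t[2] == 1]
    if not active:
        # No fingers down: the gate closes and the output is silent,
        # so no click or key press is ever needed
        return {"gate": 0, "volume": 0.0}
    x, y, _ = active[0]
    # First touch point supplies the X-Y data
    params = {"gate": 1, "position": x, "pitch": y, "volume": 1.0}
    if len(active) > 1:
        # Second touch point's vertical position controls volume
        params["volume"] = active[1][1]
    return params
```

Gating on the per-touch state value is what makes the content activate on touch and deactivate on release.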