VIDEO: John C.S. Keston at ISSTA

In September 2017 I performed at the Irish Sound, Science and Technology Association (ISSTA.ie) conference in Dundalk, Ireland (video by Daryl Feehely). The performance makes use of a custom Max patch controlled by an iPad, a Novation Circuit, a KeyStep, and a Minifooger Delay pedal. It occurred to me that it might be interesting to share the roots and evolution of this piece, so here goes. Continue reading

Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer can also loop the gestures applied to these objects, bringing in additional synthesized sound layers that contrast with or mimic them. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
Continue reading

Audiovisual Grain Machine Demo

Here’s a quick demo of the audiovisual granular synthesis software I am designing, which I’ll be presenting at Moogfest and performing with at Echofluxx. It allows a performer to apply granular synthesis to sound and corresponding video using a touch interface such as MIRA (shown). The audio and video are granulated in parallel and stay synchronized. The software can also capture and repeat gestures, so the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. This demo granulates the voice and image of Lister Rossel. In addition, I use analogue synthesizers to contrast with the digital manipulations.
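For readers curious about the basic idea, here is a minimal Python sketch of parallel audiovisual granulation. It is not the actual Max patch; the names (av_grain, SAMPLE_RATE, the grain length, and so on) are illustrative assumptions. The point is simply that an audio grain and the video frame shown with it are both pulled from the same moment in the source clip, so they stay in sync.

```python
# Sketch only: one touch position selects a point in the clip; we return a
# short, windowed audio grain plus the video frame from the same instant.
import numpy as np

SAMPLE_RATE = 48000   # audio samples per second (assumed)
FRAME_RATE = 30       # video frames per second (assumed)

def av_grain(audio, frames, position, grain_ms=80):
    """Return one audio grain and its matching video frame.

    position is normalized (0.0-1.0), e.g. the x coordinate of a touch.
    """
    grain_len = int(SAMPLE_RATE * grain_ms / 1000)
    start = int(position * (len(audio) - grain_len))
    grain = audio[start:start + grain_len].copy()

    # Hann window to avoid clicks at the grain boundaries.
    grain *= np.hanning(grain_len)

    # Pick the video frame that corresponds to the same point in time.
    t = start / SAMPLE_RATE
    frame = frames[min(int(t * FRAME_RATE), len(frames) - 1)]
    return grain, frame

# Toy usage: one second of noise as "audio", 30 placeholder "frames".
audio = np.random.randn(SAMPLE_RATE).astype(np.float32)
frames = [f"frame_{i}" for i in range(FRAME_RATE)]
grain, frame = av_grain(audio, frames, position=0.5)
print(len(grain), frame)   # -> 3840 frame_13
```

Looping gestures then amounts to replaying a recorded stream of these touch positions, each replay producing another layer of grains and frames.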

This work alludes to the speech-to-song illusion discovered by Diana Deutsch. It also evokes “event fusion”: vocalizations are repeated far faster than is humanly possible, until the repetition rate enters the audible frequency range. Adding the corresponding visuals makes the effect uncanny, as video and sound are looped at millisecond intervals.
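A rough back-of-the-envelope version of that fusion threshold: once a looped fragment repeats faster than roughly 20 times per second, we stop hearing separate repetitions and start hearing a pitch. The loop lengths and the 20 Hz cutoff below are illustrative, not measurements from the piece.

```python
# Repetition rate of a looped fragment vs. the approximate rhythm/pitch boundary.
for loop_ms in (500, 100, 50, 20, 5):
    rate_hz = 1000 / loop_ms            # repetitions per second
    percept = "pitch" if rate_hz >= 20 else "rhythm"
    print(f"{loop_ms:>4} ms loop -> {rate_hz:6.1f} Hz -> heard as {percept}")
```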