VIDEO: John C.S. Keston at ISSTA

In September 2017 I performed at the Irish Sound in Science, Technology and the Arts Conference (ISSTA.ie) in Dundalk, Ireland (video by Daryl Feehely). The performance makes use of a custom Max patch controlled by an iPad, a Novation Circuit, an Arturia KeyStep, and a Minifooger Delay pedal. It occurred to me that it might be interesting to share the roots and evolution of this piece, so here goes.

In my mind, the origins of the piece can be traced back to my interest in granular synthesis. In 2009 I developed a Max-based instrument I titled Grain Machine. It was essentially a touchscreen-controlled granular synthesizer. I used this instrument for a number of projects, including Words to Dead Lips, a dance piece in collaboration with Aniccha Arts. The instrument was played via a simple iOS interface designed in TouchOSC. A performer could access five samples, granulate them on an X-Y grid, or spin a virtual wheel with friction modeling to scrub or scratch through them.
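For readers curious about the mechanics, here is a rough sketch of the two behaviors described above. The actual instrument is a Max patch, so nothing below is a transcription of it; this is a minimal Python/NumPy illustration, and every function name and parameter is my own assumption: overlapping windowed grains read from a chosen position in a sample, and a friction-modeled wheel whose decaying velocity drives the scrub position.

```python
import numpy as np

def granulate(sample, position, grain_size=2048, n_grains=200, jitter=0.05):
    """Overlap-add windowed grains read from one spot in a mono sample.

    position   -- 0..1 read point in the sample (think of it as the X axis)
    grain_size -- grain length in samples (a natural candidate for the Y axis)
    jitter     -- random spread of grain start points, as a fraction of the sample
    """
    hop = grain_size // 4                            # 75% overlap between grains
    out = np.zeros(n_grains * hop + grain_size)
    window = np.hanning(grain_size)                  # smooth grain envelope
    for i in range(n_grains):
        # choose a start point near `position`, with a little random jitter
        start = int((position + np.random.uniform(-jitter, jitter))
                    * (len(sample) - grain_size))
        start = int(np.clip(start, 0, len(sample) - grain_size))
        out[i * hop:i * hop + grain_size] += sample[start:start + grain_size] * window
    return out / (np.max(np.abs(out)) + 1e-9)        # rough normalization

def wheel_scrub(velocity, friction=0.98, steps=200):
    """Friction-modeled wheel: spin it once and read back positions as it slows."""
    positions, pos = [], 0.0
    for _ in range(steps):
        pos = (pos + velocity) % 1.0                 # wrap around the sample
        velocity *= friction                         # friction decays the spin
        positions.append(pos)
    return positions
```

Each position returned by wheel_scrub could be fed straight into granulate as the read point, which is roughly how spinning a wheel turns into scratching through a sample.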

A couple of years later I had the crazy idea to granulate video and corresponding sound in real time. I called this technique audiovisual granular synthesis. This led to Voice Lessons, which was first exhibited for a graduate critique seminar in November 2011. Later the piece was shown at several exhibitions, including Interface at the Walker’s Point Center for the Arts in Milwaukee. I also created another installation for my thesis exhibition, called Machine Machine, that used the same technique.
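The core idea is simply that the audio and the video are cut from the same time window, so the grains stay synchronized. Again, the real piece is a Max patch; the snippet below is only a conceptual sketch under assumed conditions (a mono audio array long enough for the slice, a list of video frames, known sample rate and frame rate), with names and defaults of my own invention.

```python
import numpy as np

def av_grain(audio, frames, t_start, grain_dur=0.05, sr=44100, fps=30):
    """Cut one synchronized audiovisual grain from the same time window.

    audio     -- mono sample array at rate `sr`
    frames    -- sequence of video frames (e.g. numpy images) at rate `fps`
    t_start   -- grain start time in seconds
    grain_dur -- grain length in seconds
    """
    a0 = int(t_start * sr)
    a1 = a0 + int(grain_dur * sr)
    audio_grain = audio[a0:a1] * np.hanning(a1 - a0)    # windowed audio slice
    f0 = int(t_start * fps)
    f1 = max(f0 + 1, int((t_start + grain_dur) * fps))  # at least one frame
    video_grain = frames[f0:f1]                          # the matching frame(s)
    return audio_grain, video_grain
```

Scrubbing then just means calling this repeatedly with a moving t_start, so the image stutters in step with the sound.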

This work continued to evolve as I converted the software for interactive installations into an audiovisual performance instrument. New content seemed to ask new questions and create entirely different experiences on the platform. I set up an iPad running MIRA as a controller and implemented looping, banks for content retrieval, audio volume, and video fade mechanisms. I titled the software AVGM, or Audiovisual Grain Machine. AVGM is a big part of Vocalise Sintetica, first performed at Echofluxx in Prague with a Bass Station II and a Korg Volca Keys. The next performance was at the Katherine E. Nash Gallery for an exhibition titled The Audible Edge. I’ve played the piece at a few other venues since, most recently at ISSTA.
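MIRA mirrors UI objects from the Max patch onto the iPad, so there is no separate protocol worth showing here; but as a way of picturing what those mechanisms manage, here is a toy model of the control state, with all names and defaults invented purely for illustration.

```python
class AVGMControls:
    """Toy model of the control state a touchscreen interface might drive."""

    def __init__(self, banks):
        self.banks = banks        # e.g. {"voices": ["clip_a.mov", "clip_b.mov"]}
        self.current = None       # currently loaded audiovisual clip
        self.looping = False
        self.volume = 1.0         # 0..1 audio gain
        self.video_fade = 1.0     # 0..1 video opacity

    def recall(self, bank, slot):
        """Bank-based content retrieval: load one clip from a named bank."""
        self.current = self.banks[bank][slot]

    def toggle_loop(self):
        self.looping = not self.looping

    def fade_video(self, target, steps=30):
        """Return a linear ramp of opacities from the current fade to `target`."""
        delta = (target - self.video_fade) / steps
        ramp = [self.video_fade + delta * i for i in range(1, steps + 1)]
        self.video_fade = target
        return ramp
```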

The piece is never played the same way twice, not just because of the improvisation written into it, but because it evolves into something new with each performance. Here’s how:

1. I often update the software to improve its performance and/or functionality.
2. I develop new audiovisual content designed to express new themes and make new sounds.
3. I select different electronic instruments to play alongside the AVGM.
4. I compose new musical themes to go along with the textural content.
5. I reimagine the meaning and relevance of the piece within the context of new content and a new time period.

Consequently, although each performance has the same title, the music, visuals, and experience are very different.

So, what’s the point of making all these changes? I could play the piece in a very similar fashion every time, and that would be no less valid or relevant. The venues are typically conferences or festivals that happen once a year, so the piece is performed infrequently. This means my thinking, techniques, process, and interests have likely changed between shows. If I were playing the piece daily or even weekly, it would probably be more consistent between adjacent events. As far as I know, my audience is not expecting to recognize the piece from recordings or the last concert, and as a solo performance there are no additional performers to rehearse with. As a result, I choose to let the piece transform naturally rather than duplicate a performance I did months ago.
