VIDEO: John C.S. Keston at ISSTA

In September 2017 I performed at the Irish Sound in Science, Technology and the Arts Conference (ISSTA.ie) in Dundalk, Ireland (video by Daryl Feehely). The performance makes use of a custom Max patch controlled by an iPad, a Novation Circuit, an Arturia KeyStep, and a Moog Minifooger Delay pedal. It occurred to me that it might be interesting to share the roots and evolution of this piece, so here goes.

In my mind, the origins of the piece can be traced back to my interest in granular synthesis. In 2009 I developed a Max-based instrument I titled Grain Machine. It was essentially a touchscreen-controlled granular synthesizer. I used this instrument for a number of projects, including Words to Dead Lips, a dance piece in collaboration with Aniccha Arts. The instrument was played via a simple iOS interface designed in TouchOSC. A performer could access five samples, granulate them on an X-Y grid, or spin a virtual wheel with friction modeling to scrub or scratch through them.
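For readers unfamiliar with the technique, granular synthesis scatters short, windowed slices of a sample ("grains") around a playback position. Here is a minimal sketch of that idea in Python with NumPy; the function and its parameters are my own illustration of the concept, not code from Grain Machine itself, which was built in Max.

```python
# Minimal granular synthesis sketch (illustrative, not the actual Grain Machine patch).
# Grains are short, Hann-windowed slices of a source buffer, scattered around a
# playback position -- roughly what an X-Y granulator maps to touch coordinates.
import numpy as np

def granulate(source, sr=44100, position=0.5, grain_ms=80,
              density=50, jitter_ms=20, duration_s=2.0, seed=0):
    """Render `duration_s` seconds of grains taken near `position` (0..1)."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    hop = int(sr / density)                   # average samples between grain onsets
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * duration_s) + grain_len)
    center = int(position * (len(source) - grain_len))
    for onset in range(0, int(sr * duration_s), hop):
        jitter = int(rng.uniform(-1, 1) * sr * jitter_ms / 1000)
        start = int(np.clip(center + jitter, 0, len(source) - grain_len))
        out[onset:onset + grain_len] += source[start:start + grain_len] * window
    return out / max(1.0, np.abs(out).max())  # normalize to avoid clipping

# Example: granulate two seconds of a 220 Hz test tone.
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr * 4) / sr)
audio = granulate(tone, sr=sr, position=0.25, grain_ms=60, density=80)
```

In the instrument itself, touch coordinates on the X-Y grid would drive the position and grain size continuously; the fixed arguments here stand in for that input.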

A couple of years later I had the crazy idea to granulate video and corresponding sound in real time. I called this technique audiovisual granular synthesis. This led to Voice Lessons, which was first exhibited at a graduate critique seminar in November 2011. Later the piece was shown at several exhibitions, including Interface at the Walker’s Point Center for the Arts in Milwaukee. I also created another installation for my thesis exhibition, called Machine Machine, that used the same technique.
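Conceptually, extending this to audiovisual granular synthesis means that one grain position selects both an audio slice and the video frames that span it, so picture and sound stay locked together. A hypothetical sketch of that pairing, again assuming NumPy (av_grain, SR, and FPS are illustrative names, not from my software):

```python
import numpy as np

SR = 44100   # audio sample rate
FPS = 30     # video frame rate

def av_grain(audio, frames, position, grain_ms=120):
    """Return one (audio_grain, frame_slice) pair for a grain at `position` (0..1).

    `frames` is any indexable sequence of video frames; the same sample span
    that produces the audio grain is converted to frame indices, so the video
    always shows the moment being heard.
    """
    grain_len = int(SR * grain_ms / 1000)
    start = int(position * (len(audio) - grain_len))
    grain = audio[start:start + grain_len] * np.hanning(grain_len)
    f0 = int(start / SR * FPS)
    f1 = max(f0 + 1, int(round((start + grain_len) / SR * FPS)))
    return grain, frames[f0:f1]
```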

This work continued to evolve as I converted the software for interactive installations into an audiovisual performance instrument. New content seemed to ask new questions and create entirely different experiences on the platform. I set up an iPad running MIRA as a controller and implemented looping, banks for content retrieval, audio volume, and video fade mechanisms. I titled the software AVGM, or Audiovisual Grain Machine. AVGM is a big part of Vocalise Sintetica, first performed at Echofluxx in Prague with a Bass Station II and a Korg Volca Keys. The next performance was at the Katherine E. Nash Gallery for an exhibition titled The Audible Edge. I’ve played the piece at a few other venues as well, most recently this performance at ISSTA.
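To give a rough sense of what those mechanisms involve, here is a hypothetical sketch of the performance state such software has to track: banks of clips, loop toggles, audio gain, and a linear video fade. The names (Clip, AVGMState, recall, fade_video) are my own illustration; the actual AVGM is a Max patch, not Python.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    looping: bool = False   # loop toggle
    volume: float = 1.0     # audio gain, 0..1
    opacity: float = 1.0    # video fade, 0..1

@dataclass
class AVGMState:
    banks: dict[str, list[Clip]] = field(default_factory=dict)
    current: Clip | None = None

    def recall(self, bank: str, slot: int) -> None:
        """Pull a clip out of a bank, as a controller's bank page would."""
        self.current = self.banks[bank][slot]

    def toggle_loop(self) -> None:
        self.current.looping = not self.current.looping

    def fade_video(self, target: float, steps: int = 30) -> None:
        """Ramp the current clip's opacity linearly toward `target`."""
        step = (target - self.current.opacity) / steps
        for _ in range(steps):
            self.current.opacity += step

# Example: recall a clip from bank A and fade its video out.
state = AVGMState(banks={"A": [Clip("waves"), Clip("voices")]})
state.recall("A", 1)
state.fade_video(0.0)
```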

The piece is never played the same way twice, not just because of the improvisation written into it, but because it evolves into something new with each performance. Here’s how:

1. I often update the software to improve its performance and/or functionality.
2. I develop new audiovisual content designed to express new themes and make new sounds.
3. I select different electronic instruments to play alongside the AVGM.
4. I compose new musical themes to go along with the textural content.
5. I reimagine the meaning and relevance of the piece within the context of new content and a new time period.

Consequently, although each performance has the same title, the music, visuals, and experience are very different.

So, what’s the point of making all these changes? I could play the piece in a very similar fashion every time, and that would be no less valid or relevant. But the venues are typically conferences or festivals that happen once a year, so the piece is performed infrequently. This means my thinking, techniques, process, and interests have likely changed between shows. If I were playing the piece daily or even weekly, it would probably vary less between adjacent events. As far as I know, my audience is not expecting to recognize the piece from recordings or the last concert, and since this is a solo performance, there are no other performers to rehearse with. As a result, I choose to let the piece transform naturally rather than duplicate a performance I did months ago.


About John CS Keston

John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research. John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science, Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minneapolis Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012) and he composed the music for the short Familiar Pavement (2015). He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.
