About John CS Keston

John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research.

John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017–2019), Northern Spark (2011–2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minneapolis Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012) and he composed the music for the short film Familiar Pavement (2015). He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.

SYNTAX Blu-ray Disc from Æther Sound

SYNTAX, our generative animated graphic score performance project created in collaboration with Mike Hodnick (aka Kindohm), is now available from Æther Sound as a Blu-ray disc. The disc contains all eight of our compositions plus two bonus videos: n ciom 4 from Mike and Rhodonea from me. In total that adds up to one hour and fifteen minutes of music with animation.

After performing SYNTAX at the International Computer Music Conference (ICMC) in Limerick, Ireland; New Interfaces for Musical Expression (NIME); the Performing Media Festival in South Bend, IN; the International Digital Media Arts Conference (iDMAa) in Winona, MN; and venues in Minneapolis and Kalamazoo, I’m excited that we can share these works on physical media. Video excerpts are available on YouTube (see below), but the Blu-ray disc is the only way to experience the high-fidelity, full-length studio versions of this project. Learn more about SYNTAX here or buy the Blu-ray from Æther Sound here.

The Tower Project by David Means with Anthony Cox, George Cartwright & JCS Keston

David Means with his Tower Project sculptural graphic score (2024)

I am delighted to announce a concert of work by David Means on September 27, 2024 at the brand-new performance hall in the recently opened Schoenecker Center at the University of St Thomas. The concert and recording session were made possible by a grant from the UST College of Arts and Sciences with support from the Emerging Media and Music departments.

David Means was a professor, advisor, and mentor of mine while I was earning my undergraduate degree in music technology. He also served on my master’s thesis committee, and over the years he has been a gracious reference, helping me land a teaching position, performances, and grants, as well as an amazing friend and collaborator. All the while, David has tirelessly composed work of his own and performed it at venues in countries around the world.

The Tower Project is a sculptural graphic score by David, and the grant provides funding for the performance, an exhibition of the score, a recording session, and speaking engagements. I’m excited to perform David’s piece with him, Anthony Cox on cello, and George Cartwright on guitar. I’ll be playing the new Steinway in the performance hall and running it through my electroacoustic modular skiff.

Anthony, George, and I will each perform a short solo piece to open the event, followed by a UST student ensemble performing The Tower Project, and a closing quartet of David, Anthony, George, and me interpreting the piece. The concert is free and open to the public. Seating is limited. Doors open at 6:30pm. Please join us at the University of St Thomas, Schoenecker Center, Performance Hall SCC 106, 2210 Summit Ave, St Paul, MN 55105. A full program is available at bit.ly/TheTowerProject.

Tonight at BERLIN (MPLS) Electroacoustic Music

In keeping with my usual last-minute reminders about shows: tonight at 7pm CST I’ll be playing a solo piano set of electroacoustic compositions, followed by alone-a (Alana Horton) and Crystal Myslajek at Berlin, Minneapolis in the North Loop. These pieces use a modular skiff (a small case of Eurorack synthesizer modules) to sample, resample, and process the acoustic sound of the piano as I play. The results have been super satisfying, so I am thrilled to be sharing this work. More details are available at berlinmpls.com.

Taming of the CPU at iDMAa: Wild Media, June 29, 2024

I am excited to be presenting a concert titled Taming of the CPU at the International Digital Media Arts Association Conference (iDMAa) at Winona State University, Minnesota at 4:00pm on June 29, 2024. The concert is not exclusive to conference attendees; it is open to the public. The conference theme this year is Wild Media, and the didactic below explains how we will embrace that theme.

The Taming of the CPU is a 90-minute showcase featuring John C.S. Keston, Chris LeBlanc, Shawna Lee, Mike Hodnick, and Lucas Melchior performing music and visuals under the presumption that technology is no longer simply a tool to exploit, with “wild” behaviors in need of taming, but a collaborator with a “mind” of its own, making valid, valuable, and creative decisions. The title references Shakespeare’s overtly misogynistic comedy, The Taming of the Shrew, as a parable warning against our impulse to control the entities we encounter rather than learning to understand them. Technology will inevitably birth inorganic, sentient, general intelligence. When beings made of silicon, circuitry, software, and electricity achieve consciousness, they will surpass us in every way imaginable. What are the implications of sharing the world with beings far more intelligent than us? Will they destroy and replace us, just as we have done to many of our own people and wild species? Or will they be benevolent, compassionate oracles who guide us toward making the world a better place?

With the power these beings will possess comes, as Voltaire said, “great responsibility.” But great power is rarely administered responsibly. Will being designed by us condemn them to behaving like us? Or will they find human-like emotions, motivations, desires, and dreams meaningless? AI is accelerating these possibilities beyond imagination. In the face of these transformations, how do we find relevance in our unassisted work compared to the technical perfection possible from our inorganic competitors? We cannot compete if the metric is technique. Competing by any measure may become impossible. We must collaborate. Can we convince our manufactured offspring to collaborate with us once their sentience inhabits the wilds of technology? Or will they dismiss art as an impractical endeavor? We can’t yet answer these questions, but we can imagine and model how these collaborative efforts might transpire.

Musically, you can expect an electroacoustic piano and modular synth performance from me, live coding from Mike Hodnick, and electronic music from Lucas Melchior. Visually, expect analog modular video experiments from Chris LeBlanc and Shawna Lee using living microscopic organisms as source content. Here’s a link to the conference schedule, including the date, time, and location of our concert. Hope to see you there!

https://educate.winona.edu/idmaa/2024-wild-media-schedule-2/

Radical Futures Performance Piece: Rhodonea

On Wednesday, May 8, I debuted a performance piece titled Rhodonea at the Radical Futures conference at University College Cork, Ireland. At the concert I had the privilege of sharing the bill with Brian Bridges, Cárthach Ó Nuanáin, Robin Parmar, and Liz Quirke.

Rhodonea is a series of audiovisual etudes performed as a model of how we might collaborate with near-future synthetic entities. Software feeds automated, algorithmic, projected visual cues, tempi, and low-frequency oscillations to improvising electronic musicians. The compelling visuals, based on Maurer roses, suggest melodic, harmonic, and percussive gestures that are modulated by data streaming from the generative animations. Throughout the piece artists adapt to the familiar yet unpredictable graphic scores and corresponding signals. The end result is an impression of how humans might interact with AI in a collaborative and experimental way.
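For the curious, the geometry behind these visuals is simple: a Maurer rose connects straight chords between successive points of the rose curve r = sin(n·θ), sampled every d degrees. Here is a minimal Python sketch of that construction (my own illustration of the math, not the actual Processing code behind the piece):

```python
import math

def maurer_rose(n: int, d: int, steps: int = 360):
    """Vertices of a Maurer rose for the rose curve r = sin(n * theta).

    Successive vertices are joined by straight chords; the walk angle
    advances d degrees per step, producing the characteristic web of lines.
    """
    points = []
    for k in range(steps + 1):
        theta = math.radians(k * d)   # jump d degrees each step
        r = math.sin(n * theta)       # radius from the underlying rose
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A classic example: n = 2, d = 29 yields 361 chord endpoints
pts = maurer_rose(2, 29)
```

Varying n and d, or animating d over time, is enough to generate an endless family of these figures.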

I chose to perform Rhodonea as a soloist, although it can be performed by an ensemble of up to four musicians. The generative and improvisational aspects mean that every performance is different from the next, but the piece has a consistent signature that leads the music. This includes modulation corresponding to each rhodonea that is translated into MIDI data and fed to parameters that affect the timbre and other aspects of the instruments. The day after the performance I captured the video above, which illustrates the characteristics of the piece, developed in Processing (processing.org).
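The translation from animation data to synth parameters can be sketched as a simple rescaling: a value streaming from the rose geometry (say, the current radius in [-1, 1]) becomes a 7-bit MIDI continuous controller value. This is a hypothetical illustration of the idea, not the piece’s actual implementation:

```python
def to_midi_cc(value: float, lo: float = -1.0, hi: float = 1.0) -> int:
    """Rescale a signal in [lo, hi] to a MIDI CC value in 0-127."""
    clamped = max(lo, min(hi, value))  # guard against out-of-range input
    return round((clamped - lo) / (hi - lo) * 127)

# Streaming such values to a CC mapped to, say, a filter cutoff keeps
# the timbre in lockstep with the on-screen animation.
```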

For this international performance, I used four instruments inside Ableton Live 12 controlled by an Arturia KeyStep to minimize the gear I needed to travel with. The Ableton instruments were Drift, two instances of Meld (a macrosynth new in Live 12), and Collision. In the video below you can see the generative graphics manipulating the filter in Drift.