John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research.
John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minneapolis Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012) and he composed the music for the short Familiar Pavement (2015). He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.
Moog Music has just posted a beautifully produced new video exploring the modulation and sequencing functionality of the Moog Sub 37. Last weekend I did some exploration of my own into modulating the self-oscillating filter while driving it through the feedback circuit. Here's a snippet of the sounds that happened during that experiment. All of the sound comes from the self-oscillating filter; I used exactly none of the three oscillators (OSC1, OSC2, Sub OSC) on the instrument. It's also running through the Memory Man Delay.
WARNING: The following track contains extremely high and low frequencies. Please start with low volume levels.
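If you're curious why a filter with no oscillators feeding it can make any sound at all: push a resonant filter's Q high enough and the feedback inside the filter sustains itself, so the filter rings indefinitely at its cutoff frequency and behaves like a sine oscillator. Here's a minimal sketch of that idea using a generic Chamberlin state-variable filter in Python. This is not the Sub 37's ladder circuit, just an illustration of self-oscillation: with damping near zero, a single impulse "kick" keeps ringing forever, and sweeping the cutoff would modulate the pitch.

```python
import math

def svf_self_oscillation(freq_hz, sample_rate=44100.0, n_samples=2000, q=1e9):
    """Chamberlin state-variable filter pushed into self-oscillation.

    With damping ~0 (q very large) the filter's internal feedback is
    lossless, so it rings indefinitely at the cutoff frequency like a
    sine oscillator. The only input is a one-sample impulse to start it.
    """
    f = 2.0 * math.sin(math.pi * freq_hz / sample_rate)  # cutoff tuning coefficient
    damp = 1.0 / q                                       # ~0 => no energy loss
    low = band = 0.0
    out = []
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0       # single impulse to kick the filter
        high = x - low - damp * band     # high-pass node
        band += f * high                 # band-pass node (our "sine" output)
        low += f * band                  # low-pass node
        out.append(band)
    return out
```

Lowering `q` back to ordinary values makes the ringing die away, which is the boundary the Sub 37's resonance knob lets you cross in hardware.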
You may have noticed that my contributions to ACB have been sparse as of late, so I really appreciate Tom Player’s fascinating articles comparing electronic orchestration to the real thing. I have been busy teaching interactive media at two institutions and just finished an artist residency at Metropolitan State University working with students in the Experimental Music and Intermedia Arts program headed by professor David Means (I’ll be sharing more about that later).
In addition to teaching and other academics I have been performing regularly and maintaining a studio practice when my schedule allows. Recently this involved the addition of two new instruments: the Moog Sub 37 and the Elektron Analog Four (A4). The Sub 37 arrived back in September and the A4 in November.
This weekend I had a couple of hours to interface these new additions with my DSI Tempest analog drum machine. These three instruments seem to complement each other really well. The Tempest is gritty and a little unpredictable, the Sub 37 is instantly gratifying and expressive, while the A4 is precise, clean, and technical. Here’s an excerpt from one of my experiments last weekend.
I used Ableton Live to produce in real-time and my wavetable glitch machine Max patch to make most of the noises, which I routed into Live using Soundflower.
This five-year-old set is one of the very first things I ever posted on SoundCloud: 86 minutes from a live solo performance with Minneapolis Art on Wheels. Check out the original posts here:
This is the last of seven videos documenting my five-day recording session and performance series at the Singing Ringing Tree (SRT) in Burnley, UK. There's a lot more content in the can, but for now this is enough to represent the project. My part of the collaboration with the SRT was simultaneously recorded on site using a Novation Bass Station II connected to a USB battery. I also ran the Bass Station II through a Moog Minifooger Delay.
My last day on site was also the windiest, and it turned out that the best wind reduction happened to be a very thin cotton t-shirt wrapped around the binaural head, as you can see in the photo below. The strong winds, although useful, made the process quite difficult, and the binaural effect seemed a little less prominent with any sort of wind reduction applied. However, I was able to get a couple of good takes by carefully placing the dummy head next to the SRT and opposite the wind. Please check out the playlist of all six duets (#2 was omitted) on my YouTube channel.
This analog-sourced audiovisual piece is a collaboration with video artist Chris LeBlanc. The visuals were performed with a Hi-8 camera running through Tachyons+ and LoFiFuture processors, and keyed with a Bleep Labs synth. On the music end I'm playing my Moog Sub 37 through my Minifooger Delay, synced up to an Elektron Analog Four. I sent Chris separate signals from the Sub 37 and the A4 that he used to make the visuals respond.