About John CS Keston
John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research.
John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minneapolis Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012), and he composed the music for the short Familiar Pavement (2015). He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.
If I remember correctly from the last post, you said it registers a hit point from the brightest pixel(s).
Does it only accept one hit point?
I’m thinking it would be pretty badass to have it register the brightest pixel of each color; then you could have 2-4 people standing back in the distance with LEDs and batteries taped to their fingers. Each LED color could run through a different component on the rack, within reason, so in theory we could set up a whole orchestra (a rough sketch of the idea follows this comment).
Just a nifty idea.
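A minimal sketch of that per-color tracking idea, assuming Python with numpy and frames delivered as RGB arrays (for example from OpenCV after a BGR-to-RGB conversion). The scoring scheme, function name, and color set are illustrative assumptions, not anything from GMS itself:

```python
# Hypothetical sketch: one "hit point" per LED color, found by scoring
# each channel against the other two so a plain white light doesn't win
# every channel at once. Not taken from GMS; purely illustrative.
import numpy as np

def brightest_per_color(frame: np.ndarray) -> dict:
    """Return an (x, y) hit point for the reddest, greenest, and bluest pixels."""
    rgb = frame.astype(np.int16)  # avoid uint8 overflow when subtracting
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    scores = {
        "red": r - (g + b) // 2,
        "green": g - (r + b) // 2,
        "blue": b - (r + g) // 2,
    }
    hits = {}
    for color, score in scores.items():
        y, x = np.unravel_index(np.argmax(score), score.shape)
        hits[color] = (int(x), int(y))
    return hits
```

Each color's hit point could then drive its own instrument, which is essentially the "orchestra" idea above.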
This is the first time I have seen a work like this. It is very interesting. Maybe if you used two webcams, or divided the screen into two sections, you could add another plane. Even the brightness of the pixel could add texture to the effect.
Sweet! Thanks for giving us a peek at it John, I’ve just added another item to my gear/software lust list.
I thought of doing something like what you’re suggesting, Jake, only with huge public-space projections and laser pointers. :)
Hey Marko, thanks for your comment. I’ll probably set up some way of applying either brightness or color to CC data to manipulate a filter or LFO.
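A hedged sketch of that brightness-to-CC mapping, assuming Python and the mido library with a working MIDI backend and output port. The CC number, channel, and scaling are arbitrary illustrative choices, not GMS internals:

```python
# Hypothetical sketch: scale a 0.0-1.0 brightness to a 0-127 MIDI CC
# value, e.g. to sweep a filter cutoff or an LFO rate on a synth.
# CC 74 (commonly brightness/cutoff) is just an illustrative default.
import mido

def brightness_to_cc(port: mido.ports.BaseOutput,
                     brightness: float,
                     control: int = 74,
                     channel: int = 0) -> None:
    """Send the brightness as a control change message."""
    value = max(0, min(127, round(brightness * 127)))
    port.send(mido.Message('control_change',
                           channel=channel, control=control, value=value))

# Example usage, once per frame:
# with mido.open_output() as port:
#     brightness_to_cc(port, 0.8)  # sends CC 74 with value 102
```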
That is the best sound I’ve heard you use for the gesture sequencer yet. You should try to solo this way live for a track or two. Awesome!
@keston If you were to use different-colored lasers, things would get pretty expensive, but it would be cool.
Idea: At your next show you should give out LED rings for the crowd to put on their fingers, and have them go nuts. You could be controlling the instrument types up on stage. Might be cool?
Very nice video John!
I like how this reminds me of the theremin! It also looks like something Kitaro would play, haha!
I’ve been subscribed to your blog for a while now but this is the first time I’ve seen this video.
I used to VJ and something like this would have been great for live shows. It would be good to see the GMS, or a version of it, as a plugin for VJ software like VJamm Pro, which is the software I used to use.
Instead of using the X/Y axes to produce notes, they could be used to trigger video clips on the X axis and audio samples on the Y, or something like that (see the sketch after this comment).
Using a camera pointed into a crowd and then projecting the results could create a self-produced audiovisual experience for clubs or installations.
Just a thought, I’ll stop waffling on now.
It’s a great piece of software and I’m enjoying the results. Can’t wait to see the progression.
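A small sketch of the X/Y split suggested above: quantize a hit point's X position to a video-clip slot and its Y position to an audio-sample slot. The grid sizes and frame dimensions are made-up assumptions; this is not how GMS or VJamm Pro actually works:

```python
# Hypothetical sketch: map a hit point in a width x height frame to a
# (video clip, audio sample) pair by dividing each axis into slots.
NUM_CLIPS = 8      # columns of video clips along the X axis
NUM_SAMPLES = 8    # rows of audio samples along the Y axis

def xy_to_triggers(x: int, y: int, width: int, height: int) -> tuple:
    """Return (clip_index, sample_index) for a hit point at (x, y)."""
    clip = min(NUM_CLIPS - 1, x * NUM_CLIPS // width)
    sample = min(NUM_SAMPLES - 1, y * NUM_SAMPLES // height)
    return clip, sample

# Example: a hit point at (500, 120) in a 640x480 frame
# lands on clip 6 and sample 2.
print(xy_to_triggers(500, 120, 640, 480))  # (6, 2)
```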