Spelunking with the GMS

I have completed a lot of new functionality on my Gestural Music Sequencer (GMS) recently. I added new keyboard controls to change note durations, create dotted notes, increase and decrease the BPM, change to one of four preset scales (including a newly added whole tone scale), and toggle between “free mode” and “BPM mode”.
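For anyone curious how a preset like the whole tone scale might be represented, here's a rough sketch. The class and method names are hypothetical, not taken from the GMS source; the idea is simply that each scale is a set of semitone offsets that generated pitches get snapped to.

```java
// Hypothetical sketch of preset scales as semitone offsets within an octave.
// The whole tone scale steps by two semitones: C, D, E, F#, G#, A#.
import java.util.Map;

public class ScalePresets {
    static final Map<String, int[]> SCALES = Map.of(
        "major",      new int[] {0, 2, 4, 5, 7, 9, 11},
        "minor",      new int[] {0, 2, 3, 5, 7, 8, 10},
        "pentatonic", new int[] {0, 2, 4, 7, 9},
        "wholeTone",  new int[] {0, 2, 4, 6, 8, 10}
    );

    // Snap a raw MIDI note to the nearest pitch class in the chosen scale
    // (within the same octave, for simplicity).
    static int snapToScale(int midiNote, int[] scale) {
        int octave = midiNote / 12;
        int pitchClass = midiNote % 12;
        int best = scale[0];
        for (int step : scale) {
            if (Math.abs(step - pitchClass) < Math.abs(best - pitchClass)) {
                best = step;
            }
        }
        return octave * 12 + best;
    }
}
```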

Free mode ignores the BPM and bases the intervals between notes on the mean brightness level of each frame. Since the brightness levels of video can vary dramatically from one environment to another, I added a way to dynamically calibrate free mode. While the GMS is in free mode the up and down arrows calibrate the time intervals between notes, whereas in BPM mode they adjust the BPM.
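To give a sense of how free mode timing could work, here is a minimal sketch. It is not the actual GMS code; the base interval, the clamp, and which arrow key shortens or lengthens the intervals are all assumptions for illustration. Brighter frames simply lead to shorter waits before the next note, scaled by a calibration factor.

```java
// Hypothetical free-mode timing: brighter frames trigger notes sooner.
// The calibration factor scales the base interval so it can be tuned
// to the light level of the current environment.
public class FreeModeTimer {
    double calibration = 1.0;            // nudged with the up/down arrows in free mode
    static final double BASE_MS = 2000;  // assumed longest interval, at full darkness

    // meanBrightness ranges from 0.0 (black frame) to 1.0 (white frame)
    double nextIntervalMs(double meanBrightness) {
        double interval = BASE_MS * (1.0 - meanBrightness) * calibration;
        return Math.max(50, interval);   // clamp so notes never pile up instantly
    }

    void onUpArrow()   { calibration *= 0.9; } // assumed: shorter intervals
    void onDownArrow() { calibration *= 1.1; } // assumed: longer intervals
}
```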

Originally the note durations were set with the up and down arrows. Now they are set with the bottom row of letters on a QWERTY keyboard (z, x, c, v, b, n, m), with z being a whole note and m being a sixty-fourth note. All of these durations can be dotted or un-dotted by pressing the period key. This makes it easy to go from slow to very fast phrases instantly.
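A rough sketch of that key-to-duration mapping might look like the following. The structure and values are illustrative rather than the GMS source; durations are expressed as fractions of a whole note, and the period key toggles a dot that extends the current duration by half its value.

```java
// Hypothetical mapping of the bottom row of a QWERTY keyboard to note durations,
// expressed as fractions of a whole note, with a dotted toggle on the period key.
import java.util.Map;

public class DurationKeys {
    static final Map<Character, Double> DURATIONS = Map.of(
        'z', 1.0,        // whole note
        'x', 1.0 / 2,    // half note
        'c', 1.0 / 4,    // quarter note
        'v', 1.0 / 8,    // eighth note
        'b', 1.0 / 16,   // sixteenth note
        'n', 1.0 / 32,   // thirty-second note
        'm', 1.0 / 64    // sixty-fourth note
    );

    double duration = 1.0 / 4;
    boolean dotted = false;

    void keyPressed(char key) {
        if (DURATIONS.containsKey(key)) {
            duration = DURATIONS.get(key);
        } else if (key == '.') {
            dotted = !dotted; // a dot extends the duration by half its value
        }
    }

    double effectiveDuration() {
        return dotted ? duration * 1.5 : duration;
    }
}
```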

Here’s a two-minute test recording I made to illustrate some of the new functionality. I used the new whole tone scale and changed the durations with the new keyboard controls. I felt a bit like a cave explorer while making this recording. I had my Petzl headlamp on so I could gesture with my head as if I were peering down a dark cave, while manipulating the keyboard controls with both hands. I’d include a photo, but that’d be embarrassing.

GMS Spelunking


