Gestural Music Sequencer Generative Track Competition

The GMS beta has been out since December 2009, so I thought it would be fun to start a competition to produce a track using the tool. Unearthed Music has agreed to consider the winning track for a spot on their upcoming compilation, Unearthed Artifacts Volume One.

The rules for the competition are simple. Create an instrumental track using the GMS. Every layer in the composition must be generated from video input fed into the GMS, either through a camera or by loading a pre-recorded video clip. There are no limitations on the software or hardware used to interface with the GMS to create the instrument sounds and produce the piece.

Editing and looping of the GMS MIDI output is allowed within reason. Please refrain from looping phrases shorter than one bar or shifting notes around to tailor the melodies; instead, I suggest experimenting with the note and duration probability distributions. All drums and rhythmic patterns must be created using the GMS as well.
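
For entrants new to the generative side of the tool, one way to think about those distributions is as weighted random selection: each pitch and each duration is assigned a probability, and incoming video data (for example, the brightness of an analyzed frame) shapes the resulting note events. The Python sketch below only illustrates that general concept and is not GMS code; the pitch set, weights, and brightness-to-velocity mapping are invented for the example.

    # Illustration of weighted note/duration selection, not the GMS implementation.
    # The pitch set, weights, and brightness mapping below are hypothetical.
    import random

    # Example probability distribution over a pentatonic pitch set (MIDI note numbers).
    PITCHES = [60, 62, 65, 67, 69, 72]
    PITCH_WEIGHTS = [0.30, 0.10, 0.20, 0.15, 0.15, 0.10]

    # Example distribution over note durations, in beats.
    DURATIONS = [0.25, 0.5, 1.0, 2.0]
    DURATION_WEIGHTS = [0.40, 0.30, 0.20, 0.10]

    def next_event(brightness):
        """Pick one note event; brightness (0.0-1.0) stands in for a value
        derived from video analysis and is mapped here to MIDI velocity."""
        pitch = random.choices(PITCHES, weights=PITCH_WEIGHTS, k=1)[0]
        duration = random.choices(DURATIONS, weights=DURATION_WEIGHTS, k=1)[0]
        velocity = int(40 + brightness * 87)  # scale 0.0-1.0 into 40-127
        return pitch, velocity, duration

    if __name__ == "__main__":
        # Generate a short phrase from a made-up sequence of brightness readings.
        for b in [0.2, 0.5, 0.9, 0.7]:
            print(next_event(b))

Adjusting the weights changes how often each pitch or duration is chosen, which is the kind of experimentation the rules above encourage in place of hand-editing melodies.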

Write one hundred to three hundred words about how you produced your track and post it as a comment to this entry with a link to a 192 kbps or better MP3 file of the complete track. Links to a bio or videos about your process are great too. The track must be licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. The tracks will be judged by a panel of representatives from Unearthed Music and me. The submission deadline is Tuesday, June 1, 2010. Thanks, and have fun!


About John CS Keston

John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research. John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minnesota Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012), and he composed the music for the short Familiar Pavement (2015). He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.
