Tonight at BERLIN (MPLS) Electroacoustic Music

In keeping with my usual last-minute reminders about shows, tonight at 7pm CST I’ll be playing a solo piano set of electroacoustic compositions, followed by alone-a (Alana Horton) and Crystal Myslajek, at Berlin, Minneapolis in the North Loop. These pieces make use of a modular skiff (a small case of Eurorack synthesizer modules) to sample, resample, and process the acoustic sound of the piano as I play. The results have been super satisfying, so I am thrilled to be sharing this work. More details are available at berlinmpls.com.

Taming of the CPU at iDMAa: Wild Media, June 29, 2024

I am excited to be presenting a concert titled Taming of the CPU at the Interactive Digital Media Arts Association (iDMAa) Conference at Winona State University, Minnesota, at 4:00pm on June 29, 2024. The concert is not exclusive to conference attendees; it is open to the public. The conference theme this year is Wild Media, and the didactic below explains how we will embrace that theme.

The Taming of the CPU is a 90-minute showcase of artists featuring John C.S. Keston, Chris LeBlanc, Shawna Lee, Mike Hodnick, and Lucas Melchior performing music and visuals under the presumption that technology is no longer simply a tool to exploit, with “wild” behaviors in need of taming, but a collaborator with a “mind” of its own making valid, valuable, and creative decisions. The title references Shakespeare’s overtly misogynistic comedy, The Taming of the Shrew, as a parable warning against our impulse to control the entities we encounter rather than learning to understand them. Technology will inevitably birth inorganic, sentient, general intelligence. When beings made of silicon, circuitry, software, and electricity achieve consciousness they will surpass us in every way imaginable. What are the implications of sharing the world with beings far more intelligent than us? Will they destroy and replace us, just as we have done to so many of our own people and wild species? Or will they be benevolent, compassionate oracles who guide us toward making the world a better place?

With the power these beings will possess comes, as Voltaire said, “great responsibility.” But great power is rarely administered responsibly. Will being designed by us condemn them to behaving like us? Or will they find human-like emotions, motivations, desires, and dreams meaningless? AI is accelerating these possibilities beyond imagination. In the face of these transformations, how do we find relevance in our unassisted work compared to the technical perfection possible from our inorganic competitors? We cannot compete if the metric is technique. Competing by any measure may become impossible. We must collaborate. Can we convince our manufactured offspring to collaborate with us once their sentience inhabits the wilds of technology? Or will they dismiss art as an impractical endeavor? We can’t yet answer these questions, but we can imagine and model how these collaborative efforts might transpire.

Musically, you can expect an electroacoustic piano / modular synth performance from me, live coding from Mike Hodnick, and electronic music from Lucas Melchior. Visually, expect analog modular video experiments from Chris LeBlanc and Shawna Lee using living microscopic organisms as source content. Here’s a link to the conference schedule, including the date, time, and location of our concert. Hope to see you there!

https://educate.winona.edu/idmaa/2024-wild-media-schedule-2/

Radical Futures Performance Piece: Rhodonea

On Wednesday, May 8, I debuted a performance piece titled Rhodonea at the Radical Futures conference at University College Cork, Ireland. At the concert I had the privilege of sharing the bill with Brian Bridges, Cárthach Ó Nuanáin, Robin Parmar, and Liz Quirke.

Rhodonea is a series of audiovisual etudes performed as a model of how we might collaborate with near future synthetic entities. Software feeds automated, algorithmic, projected visual cues, tempi, and low frequency oscillations to improvising electronic musicians. The compelling visuals, based on Maurer Roses, suggest melodic, harmonic, and percussive gestures that are modulated by data streaming from the generative animations. Throughout the piece artists adapt to the familiar yet unpredictable graphic scores and corresponding signals. The end result is an impression of how humans might interact with AI in a collaborative and experimental way.
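
For those curious about the math behind the visuals: a Maurer rose is drawn by sampling a rhodonea (rose) curve, r = sin(n·k), at fixed angular steps and connecting the resulting points. Below is a minimal Processing sketch of that idea; the parameters n and d here are arbitrary examples I chose for illustration, and the actual Rhodonea animations are considerably more elaborate.

// Minimal Maurer rose sketch. The values of n and d are hypothetical
// examples; different pairs produce dramatically different figures.
float n = 6;   // petal parameter of the underlying rose r = sin(n * k)
float d = 71;  // angular step in degrees between connected points

void setup() {
  size(600, 600);
  stroke(255);
  noFill();
}

void draw() {
  background(0);
  translate(width / 2.0, height / 2.0);
  beginShape();
  for (int i = 0; i <= 360; i++) {
    float k = radians(i * d);          // walk around the rose in steps of d degrees
    float r = 250 * sin(n * k);        // rhodonea equation, scaled to the canvas
    vertex(r * cos(k), r * sin(k));    // connect successive points into the Maurer figure
  }
  endShape();
}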

I chose to perform Rhodonea as a soloist, although it can be performed by an ensemble of up to four musicians. The generative and improvisational aspects mean that every performance is different from the next, but the piece has a consistent signature that leads the music. This includes modulation corresponding to each rhodonea that is translated into MIDI data and fed to parameters that affect the timbre and other aspects of the instruments. I captured the video above the day after the performance; it illustrates the characteristics of the piece, which I developed in Processing.org.
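
As a rough illustration of that translation step, here is a minimal Processing sketch showing how a value derived from an animation might be sent out as a MIDI continuous controller using the third-party theMidiBus library. The port name, CC number, and LFO mapping are hypothetical stand-ins for illustration, not the actual Rhodonea routing.

import themidibus.*;  // theMidiBus library, installed via Processing's Contribution Manager

MidiBus midi;

void setup() {
  // "IAC Bus 1" is a hypothetical virtual MIDI output; substitute your own port name.
  midi = new MidiBus(this, -1, "IAC Bus 1");
}

void draw() {
  // Hypothetical mapping: a slow sine LFO derived from the frame count,
  // scaled to the 0-127 range expected by MIDI continuous controllers.
  float lfo = (sin(frameCount * 0.02) + 1) * 0.5;
  int value = int(lfo * 127);
  // CC 74 is conventionally mapped to filter cutoff, which is how a stream
  // like this could sweep the filter of a soft synth in a DAW.
  midi.sendControllerChange(0, 74, value);
}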

For this international performance I used four instruments inside Ableton Live 12 controlled by an Arturia KeyStep to minimize the gear I needed to travel with. The Ableton instruments I used were Drift, two instances of Meld (a macrosynth new in Live 12), and Collision. In the video below you can see how the generative graphics are manipulating the filter in Drift.

Sat. Sept. 9 Dosh and Keston with Tripfacesmile.com

Martin Dosh and I have been playing one-off shows together in various ensembles for longer than I care to divulge. However, unless I’m misremembering, this will be the first time as a duet. The Rhodes electric piano has always been our common ground, and I will have my new Osmose Expressive E along as well to share how I have been approaching that instrument. Tripfacesmile.com will start the night off with a set of modular synthesis. Please join us at ROK Bar, 882 7th St. W, Suite 12, St. Paul, MN 55102 on September 9, 2023.

Osmose Expressive E and the Uncanny Valley

The internet has been buzzing with demos of the Osmose Expressive E since they started arriving in VIPs’ studios earlier this year. I have been fascinated by it since 3D renders of it showed up in November of 2019. Four years later, I finally have it, and now that I’ve had a day or two to allow my brain to reassemble itself, I’m ready to say something about it.

There are many directions in which artists will steer this machine. One is leveraging physical modeling to emulate acoustic instruments. Doing this first requires developing the techniques and knowledge to coax the Osmose into matching the range and textures of the target instrument. Second, it requires expertly designed patches that can translate the subtleties of the player’s expression into the expected nuances. Benn Jordan has a great video here that goes into detail about how this can be done. I do not intend to address the debate regarding “should this be done?” in this article, other than to state that there is an ongoing debate (perhaps since music was first electrically amplified) along with far-reaching consequences for musicians and the music industry at large, of which we all ought to be aware.