Radical Futures Performance Piece: Rhodonea

On Wednesday, May 8, I debuted a performance piece titled Rhodonea at the Radical Futures conference at University College Cork, Ireland. At the concert I had the privilege of sharing the bill with Brian Bridges, Cárthach Ó Nuanáin, Robin Parmar, and Liz Quirke.

Rhodonea is a series of audiovisual etudes performed as a model of how we might collaborate with near-future synthetic entities. Software feeds automated, algorithmic, projected visual cues, tempi, and low-frequency oscillations to improvising electronic musicians. The compelling visuals, based on Maurer roses, suggest melodic, harmonic, and percussive gestures that are modulated by data streaming from the generative animations. Throughout the piece, artists adapt to the familiar yet unpredictable graphic scores and corresponding signals. The end result is an impression of how humans might interact with AI in a collaborative and experimental way.
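
For the curious: a Maurer rose is drawn by sampling a rose curve r = sin(n·θ) at a fixed angular step d and connecting consecutive samples with straight lines. Here is a minimal Processing sketch of the idea; the n and d values are arbitrary placeholders, and the animations in Rhodonea are far more elaborate.

```processing
// Minimal Maurer rose: sample the rose r = sin(n * theta) every d degrees
// and connect consecutive points with straight lines.
// n and d are placeholder values, not settings from the piece.
float n = 6;  // petal parameter of the underlying rose
float d = 71; // angular step in degrees between connected points

void setup() {
  size(600, 600);
}

void draw() {
  background(0);
  translate(width / 2, height / 2); // draw around the center
  stroke(255);
  noFill();
  beginShape();
  for (int k = 0; k <= 360; k++) {
    float theta = radians(k * d);
    float r = sin(n * theta) * 250;         // rose radius, scaled to pixels
    vertex(r * cos(theta), r * sin(theta)); // polar to cartesian
  }
  endShape();
}
```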

I chose to perform Rhodonea as a soloist, although it can be performed by an ensemble of up to four musicians. The generative and improvisational aspects mean that every performance is different from the next, but the piece has a consistent signature that leads the music. This includes modulation corresponding to each rhodonea that is translated into MIDI data and fed to parameters that affect the timbre and other aspects of the instruments. I captured the video above the day after the performance; it illustrates the characteristics of the piece, which I developed in Processing.org.
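
To make that modulation path concrete, here's a hedged sketch of one way to stream values from the curve as MIDI control changes, using The MidiBus library for Processing. The CC number, output bus, and scaling are illustrative assumptions, not the actual mappings in Rhodonea.

```processing
// Sample the rose as it animates and send its radius as a MIDI CC.
// Uses The MidiBus library; the output bus and CC number are placeholders.
import themidibus.*;

MidiBus midi;
float n = 6, d = 71; // same placeholder rose parameters as above

void setup() {
  MidiBus.list();                        // print available MIDI devices
  midi = new MidiBus(this, -1, "Bus 1"); // send to a virtual MIDI bus
}

void draw() {
  // Sweep around the curve over time and sample its radius
  float theta = radians(frameCount * d * 0.1);
  float r = sin(n * theta);                // -1..1
  int value = int(map(r, -1, 1, 0, 127));  // scale to the MIDI CC range
  midi.sendControllerChange(0, 74, value); // CC 74 is commonly filter cutoff
}
```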

For this international performance I used four instruments inside Ableton Live 12, controlled by an Arturia KeyStep, to minimize the gear I needed to travel with. The Ableton instruments were Drift, two instances of Meld (a macrosynth new in Live 12), and Collision. In the video below you can see how the generative graphics manipulate the filter in Drift.

Convert a Bulky Hardware Synth Project to Travel-Friendly iOS

Spoiler Alert: It’s AUM from Kymatica

This summer I am performing a piece at two international conferences and streaming a pre-recorded concert at a third. The generative, audiovisual piece is titled SYNTAX and is a collaboration with Mike Hodnick (aka Kindohm). Mike and I debuted the piece in November 2021, and at the time we both had complex hardware setups. Mine included a Prophet REV2, an Arturia KeyLab 88, a Blokas Midihub, and a Yamaha Reface CP. It was a local performance for us in Minneapolis, so I did not hesitate to use the best instruments I had access to.

But when those instruments are large and I need to travel light, I seek out portable alternatives. Some of the gear I've traveled with includes: Novation BSII, Novation Circuit, Korg Volca Keys, PreenFM2, Moog Minifooger Delay, Organelle M, Arturia KeyStep, and Korg nanoKONTROL. These devices allow me to play parts and improvise in a natural and organic way. Visuals are often part of these sets, so usually there's a computer and/or tablet in tow, but generally I reserve the sound-making for dedicated hardware.

Custom iPad UI with MIRA and AVGM (a Max project) on the Mac. At Echofluxx in Prague CZ, 2014

For these upcoming performances I concluded that iOS would do a better job of providing the sound design, signal processing, and multitimbral capabilities I needed in a carry-on form factor. The revelation surprised me, and iOS wasn't my first choice: I had started the process with other tools, but iOS was faster and solved a series of issues I kept running into with the alternative setups. I paired the iPad with an Arturia KeyStep to play the parts. In addition I included an audio interface (iConnectAUDIO4+), a powered USB hub, and a Korg nanoKONTROL for tactile sliders and knobs.

Toxic from SYNTAX (Mac) with ID700 (iPad)

AUM from Kymatica, by Jonatan Liljedahl, made this setup possible and convenient. It's essentially a mixer for iOS synths, sequencers, and signal processors, supporting AU, AUv3, Audiobus, and Inter-App Audio. Using AUv3 in AUM conveniently allows for multiple instances of the same synth or plugin. The MIDI support is phenomenal and allowed me to configure everything exactly how I wanted. All my effects are on bus sends and controlled with my ancient bus-powered Korg nanoKONTROL. I play everything I need to with my Arturia KeyStep. AUM even lets me split the keyboard (not natively supported on the KeyStep) by specifying a MIDI keyboard range per track. Setup and configuration were much easier than I expected. Every time I wondered whether AUM was capable of a feature I needed, I found it with minimal menu diving. The interface is clean and only shows you what you need, but access under the hood is merely one or two taps away.
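
For anyone curious what that split amounts to under the hood, it's note-range routing: notes below a split point go to one instrument, and everything else goes to another. Here is a rough Processing/MidiBus sketch of the logic; the device and bus names are placeholders, and AUM's internals are surely different.

```processing
// Rough sketch of a keyboard split: route incoming notes to one of two
// instruments based on pitch. Device and bus names are placeholders.
import themidibus.*;

MidiBus keystep;   // input from the controller
MidiBus lowSynth;  // instrument below the split
MidiBus highSynth; // instrument at or above the split

int splitPoint = 60; // middle C; AUM lets you set any range per track

void setup() {
  MidiBus.list(); // print available MIDI devices
  keystep   = new MidiBus(this, "Arturia KeyStep 32", -1);
  lowSynth  = new MidiBus(this, -1, "Bus 1");
  highSynth = new MidiBus(this, -1, "Bus 2");
}

void draw() { }

// The MidiBus calls these for incoming notes
void noteOn(int channel, int pitch, int velocity) {
  if (pitch < splitPoint) lowSynth.sendNoteOn(channel, pitch, velocity);
  else                    highSynth.sendNoteOn(channel, pitch, velocity);
}

void noteOff(int channel, int pitch, int velocity) {
  // Route note-offs the same way so no notes hang
  if (pitch < splitPoint) lowSynth.sendNoteOff(channel, pitch, velocity);
  else                    highSynth.sendNoteOff(channel, pitch, velocity);
}
```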

The next thing I needed to do was make sounds similar to what I was getting out of my REV2 and Yamaha CP. I say “similar” knowing that it won't do for artists who aim to replicate released recordings of their work in performance. In our case we'll be performing experimental music interpreting generative, animated, graphic scores. We expect every performance to be different; however, every movement also has its own signature, so I need approximations of the original sounds that behave in a similar way. To stand in for the Yamaha CP's Rd I model I used the excellent VTines, which I wrote about recently. For the Prophet REV2 it took three apps to design the sounds I needed: ID700, Animoog Z, and an early app named Bebot – Robot Synth that's been around since 2008.

Buchla 700 iOS synthesizer ID700 by Jonathan Schatz

I will write more about these apps in upcoming articles. For now I will say that the Buchla 700-inspired ID700 was new to me and is featured throughout this project. ID700 is unconventional, peculiar, bizarre, and I love it. One of the things that makes it stand out is the fourteen complex envelopes per voice. The envelopes have an arbitrary number of “points,” or stages, that are either linear or logarithmic, and each point can be modulated by anything from pressure (including MPE) and note-on or note-off velocity to continuous or one-shot randomness. Furthermore, each point has conditional actions that can be used to pause or stop, jump to other points (making looping envelopes possible), and trigger several other behaviors; a rough model of the idea is sketched below. ID700 is well worth a look for anyone after experimental sounds, long morphing drones, metallic percussion, and otherworldly textures. The learning curve is steeper than that of conventional synths, but the rewards are worth the effort it takes to understand this fascinating approach to sound synthesis.
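
Here's that rough model, sketched in Processing: breakpoints with linear or logarithmic segments, plus a conditional jump that creates a loop. This is only my mental model of the behavior, not ID700's implementation, and the point values are arbitrary.

```processing
// A toy multi-point envelope: each point has a segment time, a target
// level, a linear/logarithmic flag, and an optional jump index that can
// send playback back to an earlier point (a loop). Values are arbitrary.

class EnvPoint {
  float time;   // seconds to reach this point from the previous one
  float level;  // target level, 0..1
  boolean log;  // false = linear segment, true = logarithmic curve
  int jumpTo;   // index to jump to on arrival; -1 = continue onward

  EnvPoint(float time, float level, boolean log, int jumpTo) {
    this.time = time; this.level = level; this.log = log; this.jumpTo = jumpTo;
  }
}

EnvPoint[] points = {
  new EnvPoint(0.10, 1.00, false, -1), // fast linear attack
  new EnvPoint(0.50, 0.30, true,  -1), // logarithmic decay
  new EnvPoint(0.40, 0.60, false, -1), // rise...
  new EnvPoint(0.40, 0.30, false,  2)  // ...fall, then jump back: a loop
};

int index = 0;        // which point we are heading toward
float segTime = 0;    // elapsed time within the current segment
float startLevel = 0; // level at the start of the current segment

float envelope(float dt) {
  EnvPoint p = points[index];
  segTime = min(segTime + dt, p.time);
  float t = segTime / p.time;              // 0..1 progress along the segment
  if (p.log) t = log(1 + 9 * t) / log(10); // bend progress logarithmically
  float level = lerp(startLevel, p.level, t);
  if (segTime >= p.time) {                 // arrived at the point
    startLevel = p.level;
    segTime = 0;
    index = (p.jumpTo >= 0) ? p.jumpTo : min(index + 1, points.length - 1);
  }
  return level;
}

void setup() {
  size(400, 200);
}

void draw() {
  // Plot the envelope scrolling across the screen, one pixel per frame
  float level = envelope(1.0 / 60.0);
  stroke(0);
  point(frameCount % width, height - 1 - level * (height - 1));
}
```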

EDIT: In conclusion, using iOS isn't better than and doesn't replace small hardware setups, but it is a fast and convenient choice if you need to approximate a hardware project that's already been designed. If I were starting a new project and knew I would be traveling with it, I might have opted for the Organelle M or Monome Norns over iOS. With so many choices of hardware, software, and combinations of the two, the landscape of potential electronic music setups can be daunting. I hope that sharing my approach is useful or interesting. Thanks for listening!

If you’re interested in experiencing SYNTAX, the series of audiovisual works I’ve referenced throughout this article, our three upcoming performances are:

1. June 25, 2022 at the International Digital Media Arts Association (iDMAa) conference in Winona, Minnesota
2. June 29, 2022 pre-recorded performance at the New Interfaces for Musical Expression (NIME) conference in New Zealand
3. July 5, 2022 at the International Computer Music Conference (ICMC) in Limerick, Ireland

ISSTA Live Recording, September 2017

This recording was made during my appearance at the International Sound in Science, Technology and the Arts (ISSTA) Conference in Ireland on September 8, 2017. The piece is a rendition of my composition Vocalise Sintetica, first performed at Echofluxx14 in Prague. The piece is written so that it can evolve in a number of ways each time it is performed. Here’s how it changed this time around.

First of all, it uses the Audiovisual Grain Machine (AVGM), which I update frequently. This time the updates were minor improvements to speed and efficiency, though I did add some new audiovisual content. Second, in order to travel light, I limited the AVGM accompaniment to a single Novation Circuit. I loaded the Circuit with custom patches and samples, and used my Minifooger Delay on the AVGM (I usually leave it dry), but that was it, sonically. Beyond that, the Arturia KeyStep helped add a few tricks (mainly arps and one drone) to the mix.