Convert a Bulky Hardware Synth Project to Travel Friendly iOS

Spoiler Alert: It’s AUM from Kymatica

This summer I am performing a piece at two international conferences and streaming a pre-recorded concert at a third. The generative, audiovisual piece is titled SYNTAX and is a collaboration with Mike Hodnick (aka Kindohm). Mike and I debuted the piece in November 2021, and when we did, we both had complex hardware setups. Mine included a Prophet REV2, an Arturia KeyLab 88, a Blokas Midihub, and a Yamaha Reface CP. This was a local performance for us in Minneapolis, so I did not hesitate to use the best instruments I had access to.

But when those instruments are large and I need to travel light, I seek out portable alternatives. Some of the gear I've traveled with includes: Novation BSII, Novation Circuit, Korg Volca Keys, PreenFM2, Moog Minifooger Delay, Organelle M, Arturia KeyStep, and Korg nanoKONTROL. These devices allow me to play parts and improvise in a natural and organic way. Visuals are often part of these sets, so there's usually a computer and/or tablet in tow, but I generally reserve the sound making for dedicated hardware.

Custom iPad UI with MIRA and AVGM (a Max project) on the Mac. At Echofluxx in Prague CZ, 2014

For these upcoming performances I came to the conclusion that iOS would do a better job of providing the sound design, signal processing, and multitimbral capabilities I needed in a carry-on form factor. This revelation surprised me, and iOS wasn't my first choice: I had started the process using other tools, but iOS was faster and solved a series of issues I was running into with alternative setups. As I began working it became clear it was the right decision. I paired the iPad with an Arturia KeyStep to play the parts, and added an audio interface (iConnectAUDIO4+), a powered USB hub, and a Korg nanoKONTROL for tactile sliders and knobs.

Toxic from SYNTAX (Mac) with ID700 (iPad)

AUM from Kymatica by Jonatan Liljedahl made this setup possible and convenient. It's essentially a mixer for iOS synths, sequencers, and signal processors, supporting AUv3 Audio Units, Audiobus, and Inter-App Audio. Using AUv3 in AUM conveniently allows for multiple instances of the same synth or plugin. The MIDI support is phenomenal and allowed me to configure everything exactly how I wanted. All my effects are on bus sends and controlled with my ancient bus-powered Korg nanoKONTROL, and I play everything I need to with my Arturia KeyStep. AUM even lets me split the keyboard (not natively supported on the KeyStep) by specifying a MIDI keyboard range per track, as sketched below. Setup and configuration were much easier than I expected. Every time I wondered whether AUM supported a feature I needed, I found it with minimal menu diving. The interface is clean and only shows you what you need, but access under the hood is just a tap or two away.
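AUM configures all of this in its routing UI, but the underlying idea is simple: each track accepts only a range of incoming MIDI notes. Here's a minimal, hypothetical sketch of that note-range filtering; the track names, split point, and function are my own illustration, not AUM internals:

```python
# Conceptual sketch (not AUM's actual implementation): a keyboard split
# is a per-track note-range filter applied to incoming MIDI note events.

SPLIT_POINT = 60  # middle C; a hypothetical split between two tracks

tracks = [
    {"name": "ID700 bass",  "range": (0, SPLIT_POINT - 1)},
    {"name": "VTines keys", "range": (SPLIT_POINT, 127)},
]

def route_note(note: int, velocity: int) -> list[str]:
    """Return the names of tracks whose keyboard range accepts this note."""
    return [
        t["name"]
        for t in tracks
        if t["range"][0] <= note <= t["range"][1]
    ]

# A note below the split plays only the bass track:
print(route_note(48, 100))  # ['ID700 bass']
print(route_note(72, 100))  # ['VTines keys']
```

The same filter-per-track idea also explains why a controller with no native split feature, like the KeyStep, still works: the splitting happens downstream in the mixer, not on the keyboard.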

The next thing I needed to do was create sounds similar to what I was getting out of my REV2 and Yamaha CP. I say "similar" knowing that won't do for artists who aim for their performances to replicate released recordings of their work. In our case we'll be performing experimental music interpreting generative, animated, graphic scores. We expect every performance to be different; however, every movement also has its own signature, so I need approximations of the original sounds that behave in a similar way. To stand in for the Yamaha CP's Rd I model I used the excellent VTines, which I wrote about recently. For the Prophet REV2 it took three apps to design the sounds I needed: ID700, Animoog Z, and an early app named Bebot – Robot Synth that's been around since 2008.

Buchla 700 inspired iOS synthesizer ID700 by Jonathan Schatz

I will write more about these apps in upcoming articles. For now I will say that the Buchla 700 inspired ID700 was new to me and is featured throughout this project. ID700 is unconventional, peculiar, bizarre, and I love it. One of the things that makes it stand out is the fourteen complex envelopes per voice. The envelopes have an arbitrary number of "points," or stages, that are either linear or logarithmic, and each point can be modulated by anything from pressure (including MPE) and note-on or note-off velocity to continuous or one-shot randomness. Furthermore, each point has conditional actions that can pause or stop the envelope, jump to other points (making looping envelopes possible), and more; a toy model of the idea follows below. ID700 is well worth a look for anyone after experimental sounds, long morphing drones, metallic percussion, and otherworldly textures. The learning curve is steeper than that of conventional synths, but the rewards justify the effort it takes to understand this fascinating approach to sound synthesis.
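To make the point-based envelope idea concrete, here is a deliberately simplified sketch. This is not ID700's actual engine; the point fields, curve handling, and jump action are my own illustration of how stages with conditional jumps can yield looping envelopes:

```python
# Toy model of a multi-point envelope with conditional actions,
# loosely inspired by ID700's design (not its actual engine).
from dataclasses import dataclass

@dataclass
class Point:
    level: float                # target level at the end of this stage
    time: float                 # stage duration in seconds
    curve: str = "lin"          # "lin" or "log" segment shape
    jump_to: int | None = None  # index to jump to (enables looping)

def render(points, sr=100, max_seconds=3.0):
    """Walk the points, emitting a level per sample; jumps create loops."""
    out, level, i, elapsed = [], 0.0, 0, 0.0
    while i < len(points) and elapsed < max_seconds:
        p = points[i]
        steps = max(1, int(p.time * sr))
        for s in range(steps):
            t = (s + 1) / steps
            if p.curve == "log":
                t = t ** 0.3  # fast-then-slow approach to the target
            out.append(level + (p.level - level) * t)
        level, elapsed = p.level, elapsed + p.time
        i = p.jump_to if p.jump_to is not None else i + 1
    return out

# A looping "envelope LFO": rise, fall, then jump back to point 0.
# The max_seconds guard stands in for a note-off ending the loop.
env = [Point(1.0, 0.25), Point(0.2, 0.25, "log", jump_to=0)]
samples = render(env)
```

Even this stripped-down version shows why the approach is so flexible: an envelope with jumps is really a tiny program, and fourteen of them per voice adds up to a lot of expressive movement.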

EDIT: In conclusion, using iOS isn't better than and doesn't replace small hardware setups, but it is a fast and convenient choice if you need to approximate a hardware setup or project that's already been designed. If I were starting a new project and knew I was traveling with it, I might have opted for the Organelle M or Monome Norns over iOS. With so many choices of hardware, software, and combinations of the two, the landscape of potential electronic music setups can be daunting. I hope that sharing my approach is useful or interesting. Thanks for listening!

If you’re interested in experiencing SYNTAX, the series of audiovisual works I’ve referenced throughout this article, our three upcoming performances include:

1. June 25, 2022 at the International Digital Media Arts Association (iDMAa) conference in Winona, Minnesota
2. June 29, 2022 pre-recorded performance at the New Interfaces for Musical Expression (NIME) conference in New Zealand
3. July 5, 2022 at the International Computer Music Conference (ICMC) in Limerick, Ireland

VIDEO: John C.S. Keston at ISSTA

In September 2017 I performed at the Irish Sound in Science, Technology and the Arts Conference (ISSTA.ie) in Dundalk, Ireland (video by Daryl Feehely). The performance makes use of a custom Max patch controlled by an iPad, a Novation Circuit, a KeyStep, and a Minifooger Delay pedal. It occurred to me that it might be interesting to share the roots and evolution of this piece, so here goes.

AVGM: Rheology

Here’s another movement from my composition Vocalise Sintetica that I performed at Echofluxx in Prague and later during Northern Spark 2014. I named the movement Rheology after the study of the flow of matter in the liquid state. The audiovisual content was created with a Max patch I developed called AVGM (AV Grain Machine). The instruments that I used to create the accompaniment include: DSI Tempest, Bass Station II, Korg Volca Keys, and Memory Man Delay.

AVGM with Tempest, BSII, and Volca Keys

During Northern Spark 2014 I performed a version of Vocalise Sintetica at the Katherine E. Nash Gallery. The event also marked the opening of The Audible Edge (May 27 through July 26, 2014), a sound art exhibit in which I also took part. Since it was a local performance I decided to introduce the DSI Tempest into the setup (along with the Bass Station II, Korg Volca Keys, and Memory Man Delay).

This led me in a completely different direction from the performance in Prague. I was quite happy with the results, so I produced a few studio versions of alternative movements. For these videos I made a screen capture of AVGM (Audiovisual Grain Machine) and interspersed shots of the instrumentation. Here's the first alternative movement of I. Machines. I hope to post a couple more movements at a later date. View photos from the performance below.

Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic them; a rough sketch of the granulation idea follows below. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
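AVGM itself is a Max patch, but for readers curious about what "granulating in real time" means in code form, here's a minimal, hypothetical sketch of overlap-add granular playback. The function, parameter names, and values are my own illustration of the general technique, not AVGM's implementation:

```python
# Minimal granular-playback sketch: a toy analogue of the audio side of
# a grain machine like AVGM (names and parameters are hypothetical).
import numpy as np

def granulate(source, position, grain_ms=80, grains=50, sr=44100, jitter_ms=20):
    """Overlap-add Hann-windowed grains read near `position` (0..1)."""
    glen = int(sr * grain_ms / 1000)
    hop = glen // 2
    window = np.hanning(glen)
    out = np.zeros(hop * grains + glen)
    center = int(position * (len(source) - glen))
    for g in range(grains):
        # Random read-position jitter keeps the texture from sounding static.
        jitter = np.random.randint(-int(sr * jitter_ms / 1000),
                                   int(sr * jitter_ms / 1000) + 1)
        start = int(np.clip(center + jitter, 0, len(source) - glen))
        out[g * hop : g * hop + glen] += source[start : start + glen] * window
    return out

# Freeze-and-scrub: a touch position picks where in the source grains read.
sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)  # stand-in audio
texture = granulate(source, position=0.5)
```

In a live setting, the `position` argument is what a finger on the multitouch surface would drive, so dragging across the screen scrubs the grain cloud through the source material.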