Convert a Bulky Hardware Synth Project to Travel-Friendly iOS

Spoiler Alert: It’s AUM from Kymatica

This summer I am performing a piece at two international conferences and streaming a pre-recorded concert at a third. The generative, audiovisual piece, titled SYNTAX, is a collaboration with Mike Hodnick (aka Kindohm). Mike and I debuted the piece in November 2021, and when we did, we both had complex hardware setups. Mine included a Prophet REV2, an Arturia KeyLab 88, a Blokas Midihub, and a Yamaha Reface CP. This was a local performance for us in Minneapolis, so I did not hesitate to use the best instruments I had access to.

But when those instruments are large and I need to travel light I seek out portable alternatives. Some of the gear I’ve travelled with includes: Novation BSII, Novation Circuit, Korg Volca Keys, PreenFM2, Moog Minifooger Delay, Organelle M, Arturia KeyStep, and Korg nanoKONTROL. These devices allow me to play parts and improvise in a natural and organic way. Visuals are often part of these sets, so usually there’s a computer and/or tablet in tow, but generally I reserve the sound making for dedicated hardware.

Custom iPad UI with MIRA and AVGM (a Max project) on the Mac. At Echofluxx in Prague CZ, 2014

For these upcoming performances I came to the conclusion that iOS would do a better job of providing the sound design, signal processing, and multitimbral capabilities I needed in a carry-on form factor. The revelation surprised me, and iOS wasn't even my first choice: I had started the process with other tools, but iOS was faster and solved a series of issues I was running into with the alternative setups, and as I began working it became clear it was the right decision. I paired the iPad with an Arturia KeyStep to play the parts, and added an audio interface (iConnectAUDIO4+), a powered USB hub, and a Korg nanoKONTROL for tactile sliders and knobs.

Toxic from SYNTAX (Mac) with ID700 (iPad)

AUM from Kymatica, by Jonatan Liljedahl, made this setup possible and convenient. It's essentially a mixer for iOS synths, sequencers, and signal processors, supporting AUv3, Audiobus, and Inter-App Audio. Using AUv3 in AUM conveniently allows multiple instances of the same synth or plugin. The MIDI support is phenomenal and allowed me to configure everything exactly how I wanted. All my effects are on bus sends and controlled with my ancient bus-powered Korg nanoKONTROL, and I play everything I need to with my Arturia KeyStep. AUM even lets me split the keyboard (not natively supported on the KeyStep) by specifying a MIDI keyboard range per track. Setup and configuration were much easier than I expected: every time I wondered whether AUM was capable of a feature I needed, I found it with limited menu diving. The interface is clean and only shows you what you need, but access under the hood is merely one or two taps away.
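AUM handles all of this through its UI, but if you're curious what a key split amounts to under the hood, it's just routing note messages by range. Here's a minimal sketch of the concept in JavaScript using the Web MIDI API; the split point and channel assignments are arbitrary examples of mine, not anything AUM exposes:

```javascript
// Conceptual keyboard split: notes below the split point are routed to
// MIDI channel 1, notes at or above it to channel 2. Runs in a browser
// with Web MIDI support and assumes at least one input and output.
const SPLIT_POINT = 60; // middle C; arbitrary example value

navigator.requestMIDIAccess().then((midi) => {
  const output = [...midi.outputs.values()][0];
  for (const input of midi.inputs.values()) {
    input.onmidimessage = ({ data }) => {
      const [status, note, velocity] = data;
      const type = status & 0xf0;
      if (type !== 0x90 && type !== 0x80) return; // only note on/off
      const channel = note < SPLIT_POINT ? 0 : 1; // channels are 0-indexed in the status byte
      output.send([type | channel, note, velocity]);
    };
  }
});
```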

The next thing I needed to do was make sounds similar to what I was getting out of my REV2 and Yamaha CP. I say "similar" knowing that won't do for artists who aim for their performances to replicate released recordings of their work. In our case we'll be performing experimental music interpreting generative, animated, graphic scores. We expect every performance to be different; however, every movement also has its own signature, so I need approximations of the original sounds that behave in a similar way. To stand in for the Yamaha CP's Rd I model I used the excellent VTines, which I wrote about recently. For the Prophet REV2 it took three apps to design the sounds I needed: ID700, Animoog Z, and an early app named Bebot – Robot Synth that's been around since 2008.

ID700, a Buchla 700-inspired iOS synthesizer by Jonathan Schatz

I will write more about these apps in upcoming articles. For now I will say that the Buchla 700-inspired ID700 was new to me and is featured throughout this project. ID700 is unconventional, peculiar, bizarre, and I love it. One of the things that makes it stand out is its fourteen complex envelopes per voice. Each envelope has an arbitrary number of "points" or stages that are either linear or logarithmic, and each point can be modulated by anything from pressure (including MPE) and note-on or note-off velocity to continuous or one-shot randomness. Furthermore, each point has conditional actions that can pause or stop the envelope, jump to other points (making looping envelopes possible), and more. ID700 is well worth a look for anyone after experimental sounds, long morphing drones, metallic percussion, and otherworldly textures. The learning curve is steeper than that of conventional synths, but the rewards are well worth the effort it takes to understand this fascinating approach to sound synthesis.
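To make that concrete, here is a toy JavaScript sketch of an envelope built this way: a list of points, each with a target level, a duration, a curve shape, and an optional action such as jumping back to an earlier point to loop. The field names and the square-root "log" curve are my own simplifications for illustration, not ID700's actual format:

```javascript
// A toy multi-point envelope in the spirit of ID700's: the last point's
// action jumps back to point 1, so points 1 and 2 loop until we stop.
const envelope = [
  { level: 1.0, time: 0.1, curve: "lin" },                     // fast attack
  { level: 0.4, time: 0.5, curve: "log" },                     // decay
  { level: 0.7, time: 0.3, curve: "lin", action: { jump: 1 } } // loop back
];

function run(env, maxPoints = 8) {
  let i = 0, from = 0;
  for (let count = 0; count < maxPoints && i < env.length; count++) {
    const p = env[i];
    for (let t = 0; t <= 1; t += 0.5) {
      // a crude logarithmic bend; linear is a straight ramp
      const shaped = p.curve === "log" ? Math.sqrt(t) : t;
      const level = from + (p.level - from) * shaped;
      console.log(`point ${i} t=${t.toFixed(1)} level=${level.toFixed(2)}`);
    }
    from = p.level;
    i = p.action?.jump !== undefined ? p.action.jump : i + 1; // conditional action
  }
}

run(envelope);
```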

EDIT: In conclusion, using iOS isn't better and doesn't replace small hardware setups, but it is a fast and convenient choice if you need to approximate a hardware setup or project that's already been designed. If I were starting a new project and knew I was traveling with it, I might have opted for the Organelle M or Monome Norns over iOS. With so many choices of hardware, software, and combinations of the two, the landscape of potential electronic music setups can be daunting. I hope that sharing my approach is useful or interesting. Thanks for listening!

If you’re interested in experiencing SYNTAX, the series of audiovisual works I’ve referenced throughout this article, our three upcoming performances include:

1. June 25, 2022 at the International Digital Media Arts Association (iDMAa) conference in Winona, Minnesota
2. June 29, 2022 pre-recorded performance at the New Interfaces for Musical Expression (NIME) conference in New Zealand
3. July 5, 2022 at the International Computer Music Conference (ICMC) in Limerick, Ireland

Drones

Drones is the next piece in the Strands series. These audiovisual compositions illustrate the interpretation of animated, generative, graphic scores written in JavaScript. Drones is made up of animated Bezier curves. Interpretation of this piece is more abstract than the others. I interpret the motion of the curves as layered, morphing drones. This piece might elicit entirely different results from one performance to the next.
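As a rough idea of what an animated Bezier score involves (my own simplification, not the actual score code), the control points of a curve can be given a slow random walk, so the curve morphs from frame to frame the way a drone morphs over time:

```javascript
// Drift a cubic Bezier's four control points with a small random walk;
// a performer reads the slowly morphing curve as a layered drone.
let controls = Array.from({ length: 4 }, () => ({
  x: Math.random() * 100,
  y: Math.random() * 100,
}));

function step() {
  controls = controls.map(({ x, y }) => ({
    x: x + (Math.random() - 0.5) * 2,
    y: y + (Math.random() - 0.5) * 2,
  }));
  return controls;
}

for (let frame = 0; frame < 3; frame++) {
  console.log(step().map((p) => `(${p.x.toFixed(0)}, ${p.y.toFixed(0)})`).join(" "));
}
```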

Generator

Generator is the next piece in the Strands series. These audiovisual compositions illustrate the interpretation of animated, generative, graphic scores written in JavaScript. Generator is made up of connected line segments that go from left to right, up, or down, but never in reverse. The weight and length of each segment is consistent across the width of the screen and changes once a new set of segments starts again from the left.
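As an illustration of the kind of generative logic involved (my own reconstruction in plain JavaScript, not the score itself), one pass of segments might be produced like this:

```javascript
// Generate one left-to-right pass of connected segments. Each segment moves
// right, up, or down, never backwards; length and weight are fixed per pass
// and re-rolled when the next pass starts from the left.
function generatePass(width = 100) {
  const length = 5 + Math.random() * 15; // constant for this pass
  const weight = 1 + Math.random() * 4;  // constant for this pass
  const points = [{ x: 0, y: 50 }];
  while (points[points.length - 1].x < width) {
    const { x, y } = points[points.length - 1];
    const dir = Math.floor(Math.random() * 3); // 0: right, 1: up, 2: down
    points.push(
      dir === 0 ? { x: x + length, y } :
      dir === 1 ? { x, y: y - length } :
                  { x, y: y + length }
    );
  }
  return { length, weight, points };
}

console.log(generatePass());
```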

I interpret each set of segments as an arpeggio. The tempo of each arpeggio is decided by the segment length: shorter segments, drawn more quickly, are paired with faster arpeggios. As the line segments wander up and down I generally interpret the Y axis as pitch, but because the direction of each segment is random, the pitches are not exact representations of the paths displayed, nor is it the intent to follow the visuals exactly. Instead, musicians interpret the score so that human qualities contrast with the computer-generated visuals.

The aesthetics of the pieces in this series, both visual and sonic, are secondary to the objectives. First, the scores are composed for the purpose of being read by musician(s). Second, the artist(s) have space to improvise within their interpretations. Third, aleatoric elements, in addition to interpretation, make the pieces significantly different from one performance to the next. Finally, although the performances vary, distinct characteristics identify each piece.

The objectives of these pieces lead to music that is often atonal and/or atemporal. After about a dozen rehearsals, performances, and recordings, with a trio and as a soloist, it has become apparent that tonality and timing often do emerge. For example, in Generator the tempi of the arpeggios change with each animated progression from left to right, and rests of random length are interspersed between them. This amounts to timing without time signatures. And since the pitches are left up to the artist, the notes performed may or may not be in key. In my performance I chose to use a variety of intervals and scales, leading up to the chromatic scale at the conclusion.

Strands

Strands is the working title for a series of audiovisual compositions based on the idea of animated, generative, graphic scores. Last year I composed six of these scores, written in JavaScript, for Parking Ramp Project, a performance installation in a seven-level parking ramp with a large cast reflecting on transience, migration, and stability, commissioned by Guggenheim fellow Pramila Vasudevan. While Parking Ramp Project was composed for a trio, Strands is specifically composed for a soloist.

Rain is a new movement in the series and the first that I have produced with video of the animated score. Currently there are five movements in the piece. I performed the first four recently at the ISSTA conference in Cork, Ireland. The visual part of the piece is meant to be read like music but without the use of key or time signatures. Each time the piece is played the visuals are regenerated, so it is never performed the same way twice.

The musician may interpret the visuals in many ways. For example, in Rain lines are animated from the top of the screen to the bottom. Where the line appears horizontally is roughly regarded as pitch and as the line animates the sound is modulated. The lines also vary in weight. Heavier lines are louder and lower in pitch while thinner lines are quieter, generally higher, and sometimes altered with a high-pass filter.
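In code terms, my reading of a Rain line maps roughly like this (the ranges and thresholds are my own approximations, not anything formalized in the score):

```javascript
// Map a line's horizontal position and stroke weight to sound parameters:
// x position suggests pitch; weight suggests loudness, register, and filtering.
function interpretLine({ x, weight }, screenWidth = 800) {
  const basePitch = 36 + Math.round((x / screenWidth) * 48); // MIDI notes 36..84
  return {
    pitch: basePitch - Math.round(weight * 2), // heavier lines sit lower
    loudness: Math.min(1, weight / 5),         // heavier lines are louder
    highpass: weight < 2,                      // thin lines get a high-pass filter
  };
}

console.log(interpretLine({ x: 200, weight: 1.5 }));
console.log(interpretLine({ x: 650, weight: 4.0 }));
```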

I performed Rain using the Novation Bass Station II, which has a feature (AFX mode overlays) that allows each note to have entirely different parameters. With this technique I was able to map different timbres across the keybed and use that variety in texture as another way to interpret the score. Keep an eye out for more of these: it is my intent to make videos for all five of the movements, and perhaps add one or two more to the series.

ISSTA 2019 Presentation and Performance

Currently I'm in Cork, Ireland to present and perform at the International Sound in Science, Technology and the Arts (ISSTA 2019) conference. This year, on Thursday, October 31st, I am scheduled to give a paper about a project I have been working on titled IGNIEUS; then on Friday I will give a solo electronic music performance related to the paper. I will share more about this soon. For now you can find the program at ISSTA.ie.

This year the conference features a keynote talk and performance by Ableton Live co-creator Robert Henke, who will be performing his work Dust.

Dust is a slow and intense exploration of complex textural sounds, shredded into microscopic particles, and pulsating interlocking loops, recomposed during an improvised performance. The sources are leftovers of digital processes, material created with old analogue synthesisers, noises of all colours and flavours, field recordings; splashing waves from a shingle beach, captured on site in Australia, a massive storm, steam from my Italian coffee maker, crackles from the lead-out groove of a worn record, hum and electrical discharges from a large transformer, collected over several years, and refined and deconstructed in various ways.

I am pleased to be featuring the Organelle M from Critter & Guitari in the setup for my performance on Friday. I have had the instrument for about six weeks, which is long enough to just scratch the surface of the device’s capabilities. I’m also using the Bass Station II with the new 4.14 firmware. More to come!