Using Tidal to Control the Roland System-1M

This is Mike Hodnick with my first article on audiocookbook.org. I recently added the Roland System-1M semi-modular synth to my studio and live setups, and as with any new instrument I wanted to take it to extremes and see what it could do. It was the perfect occasion to document the results on audiocookbook.org!

I’m not your typical producer or performer. I write computer code, often improvised, to produce sound both live and in the studio. I use a language and live-coding environment called Tidal to trigger samples, play MIDI devices, and create sequences. Instead of a DAW or sequencer, I use a text-based language to create sound.

My first real experiment with the System-1M was to automate all of its MIDI Control Change parameters from code simultaneously. It’s kind of like running a few dozen LFOs at once. I like to do this with all of my instruments to push them to extremes and maybe even coax out some interesting sounds. As an added twist, I thought it would be fun to also live-patch the modular inputs and outputs on the System-1M while the MIDI automation was taking place. Here is the result:

The source code used for this performance experiment is at the bottom of this post. The only parameters not automated in this example were the oscillator 1 level (kept at 100%), the Mono/Poly toggle (kept on monophonic), the legato toggle (off), the amp crusher (off), and the LFO key retrigger (off). Details about the System-1M’s MIDI implementation can be found at roland.com/support/by_product/system-1/owners_manuals/8789.

There are some brilliant sounds coming out of this thing!

By far, my favorite features of this synth are the two oscillators and their controls. Each oscillator supports multiple waveforms, modulation control (oscillator 2 can be ring-modulated by oscillator 1, and oscillator 1 can be cross-modulated by oscillator 2), and a “color” parameter that can be modulated by the LFO or the filter/amp envelopes. Oscillator 2 also has a fine-tune control. With all of these combined, the possibilities are enormous.
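
For a taste of what this looks like in code, here is a stripped-down sketch (not from the performance itself) that sweeps only the oscillator controls, reusing the m MIDI stream and the custom rosc* parameters from the full example at the bottom of this post:

-- sweep just the oscillator parameters
m $ note "[45 33 57]*4"
|+| rosc1color (scale 0 1 $ slow 3 sine1)
|+| rosc1xmod (scale 0 1 $ density 2 sine1)
|+| rosc2color (scale 0 1 $ slow 5 sine1)
|+| rosc2tune (scale 0.4 0.6 $ slow 4 sine1)
|+| rosc2ring "[0 1]*4"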

Stay connected at kindohm.com, @kindohm or facebook.com/kindohm for info about Mike’s studio experiments, releases, and performances.

Here’s the source code used to control the System-1M:

-- play an m9 arpeggio, starting from MIDI note 45, 33, or 57
m $ slow 2 $ (|+| note "[45 33 57]*4") $ mel m9 10 "0*16?"
|+| dur (scale 0.05 0.2 $ slow 1.9666 sine1)
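-- filter: cutoffs, envelope, and resonance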
|+| rlpcutoff (scale 0 1 $ density 1.01 sine1)
|+| rhpcutoff (scale 0 1 $ density 1.132 sine1)
|+| rfilteratk (scale 0 0.5 $ slow 1.2 sine1)
|+| rfilterdecay (scale 0.05 0.5 $ density 1.5181 sine1)
|+| rfiltersustain (scale 0.1 1 $ density 1.277777 sine1)
|+| rfilterrelease (scale 0.05 0.5 $ slow 1.523 sine1)
|+| rres (scale 0 0.7 $ density 1.313 sine1)
|+| rfilterenv (scale 0.1 0.9 $ density 1.111 sine1)
|+| rcrush "0" -- amp crusher kept off
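-- amp envelope, pitch envelope, and portamento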
|+| rampatk (scale 0 0.5 $ slow 1.213 sine1)
|+| rampdecay (scale 0.05 0.7 $ density 1.333 sine1)
|+| rampsustain (scale 0 1 $ slow 2.313 sine1)
|+| ramprelease (scale 0.05 0.3 $ slow 2.877 sine1)
|+| rpitchenv (scale 0.2 0.8 $ density 1.987 sine1)
|+| rport (scale 0 0.5 $ slow 1.77777 sine1)
|+| rpitchatk (scale 0 0.5 $ density 3.4111 sine1)
|+| rpitchdecay (scale 0 0.5 $ density 1.2222 sine1)
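-- oscillators, sub, and noise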
|+| rosc1 "1" -- oscillator 1 level kept at 100%
|+| rosc2 (scale 0 1 $ slow 2.6665 sine1)
|+| rosc2tune (scale 0.2 0.8 $ slow 3 sine1)
|+| rsub (scale 0 1 $ slow 1.919 sine1)
|+| rnoise (scale 0 1 $ density 3.71771 sine1)
|+| rnoisetype "[0 1]*3"
|+| rsubtype "[0 1]*5"
|+| rlegato "0" -- legato kept off
|+| rmono "0.5" -- kept monophonic
|+| rosc1type (scale 0 1 $ slow 1.77777 sine1)
|+| rosc1range (scale 0 1 $ slow 2.8888 sine1)
|+| rosc1color (scale 0 1 $ density 1.4344 sine1)
|+| rosc1xmod (scale 0 1 $ density 1.30010010 sine1)
|+| rosc1mod (scale 0 1 $ density 3 sine1)
|+| rosc2type (scale 0 1 $ slow 0.9999 sine1)
|+| rosc2range (scale 0 1 $ slow 3.151 sine1)
|+| rosc2color (scale 0 1 $ slow 5.131 sine1)
|+| rosc2ring "[0 1]*9"
|+| rosc2mod (scale 0 1 $ slow 3.141 sine1)
|+| rosc2sync "[0 1]*7"
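-- LFO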
|+| rlforate (scale 0 1 $ slow 2.17717 sine1)
|+| rlfofilter (scale 0 1 $ slow 3.3333 sine1)
|+| rlfoamp (scale 0 1 $ slow 1.21 sine1)
|+| rlfotype rand
|+| rlfokeytrig "0" -- LFO key retrigger kept off
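-- effects: delay and reverb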
|+| rdelay (scale 0 1 $ sine1)
|+| rdelaytime (scale 0 1 $ slow 3.888 sine1)
|+| rreverb (scale 0 0.4 $ density 1.2331121 sine1)

Interview: The Mind of Video Artist Chris LeBlanc

Chris LeBlanc is a video artist I have been collaborating with frequently for the last year and a half. The body of work he has produced in this short period is remarkable. His improvised visuals for musical performances include mash-ups from rare VHS tapes of bizarre B-movies, usually of the sci-fi, horror, or fighting genres. He augments these mix tapes with circuit-bent Nintendos and a vast collection of other analog video devices to produce uncanny, audio-responsive visual experiences that enhance musical performances and draw in listeners. Recently he added a modular video synthesis system to his rig and salvaged a nine-by-nine CRT video wall for display.

On Thursday, October 22nd, Chris produced visuals for a solo performance of mine at a club outfitted with a projector and fifty-one flat-screen monitors dispersed throughout the venue. Chris managed to display his video art on the projector and all of the flat screens during my performance. This lasted for about half the set, until an irate bar manager found him and made him put the hockey game back on a few of the screens. In addition to his performances he creates music videos and stills using the same equipment and similar techniques. After our most recent show I thought it would be great to share a discussion with Chris here on ACB. I interviewed him about what drives his decisions as an artist and how he makes his analog imagery so engaging while using content and technology from a bygone era.

Read on for the interview with Chris LeBlanc, plus more videos and still photo examples of his work.

How Kindohm Makes Wicked Breaks in Tidal

If you’re familiar with live coding (performing music through the process of writing code), then you’ve probably heard of Mike Hodnick (aka Kindohm). Mike and I have had the pleasure of performing together on several occasions, and I’m thoroughly impressed with his technique and aesthetic. In this video Mike goes in-depth on how he creates breakbeats using Tidal, one of several languages commonly used for live coding.

Vintage FM: Swapping Bricks for Loaves of Bread

I recently picked up an eighties-vintage Yamaha TX81Z FM synthesizer. I’ve always loved the sound of frequency modulation synthesis, but like many of us, lacked the patience to do the programming, especially since most FM synthesizers have hundreds (thousands, for the Yamaha FS1R) of parameters that one is expected to edit via a few buttons and a thirty-two-character LCD.

Understandably, FM has largely taken a back seat to subtractive synthesis, wavetable synthesis, and sampling. In the 80s FM was great because sample memory was expensive: instead of storing recordings, bell tones, plucked instruments, strings, and brass could be simulated by cleverly selecting an algorithm and adjusting the frequencies, levels, and envelopes of the carrier and modulator operators. The price of that sound quality was handling the complexity of the instrument and the time investment that required.
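
To make that concrete: in its simplest two-operator form, FM is just one sine wave modulating the phase of another, with the frequency ratio and modulation index shaping the timbre. Here is a rough Haskell sketch of a single bell-like voice (the frequencies, ratio, and envelope are illustrative choices of mine, not an actual TX81Z patch):

-- two-operator FM: the modulator wiggles the carrier's phase
fmSample :: Double -> Double -> Double -> Double -> Double
fmSample carrier modulator index t =
  sin (2 * pi * carrier * t + index * sin (2 * pi * modulator * t))

-- one second of a bell-like tone at 44.1 kHz; the non-integer
-- frequency ratio gives the inharmonic partials of a bell, and the
-- decaying modulation index darkens the tone as it fades
bell :: [Double]
bell =
  [ env t * fmSample 440 1386 (8 * env t) t
  | n <- [0 .. 44099 :: Int]
  , let t = fromIntegral n / 44100 ]
  where
    env t = exp (-4 * t)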

Soon memory fell in price, and the cost of sampling and wavetable synthesizers dropped with it. By the mid-90s the broad popularity of FM synths like the Yamaha DX7 had given way to samplers, ROMplers, and wavetable synths. Perhaps we were attracted to the realism of sampling, or the uncanny quality of pitching familiar sounds into unfamiliar territory. But all of these synthesis technologies have their place, and what makes FM synthesis relevant to this day is not simulating brass or bell tones, but its ability to uncover new sonic palettes through the complexity of maths, parameters, and algorithms rather than the brute force of digital memory banks.

So, how do we navigate this world of nearly infinite possibilities? There are many approaches to this dilemma. Software editors are available, and FM synthesizer plugins like Ableton’s Operator and Native Instruments’ FM8 are much, much easier to program than their hardware counterparts, all while maintaining flexibility and sonic range. FM8 can load DX7 patches, morph between sounds, or randomize parameters. My approach for this experiment was to exploit a hardware instrument (the TX81Z) already limited by its design.

I composed this piece by designing a Max for Live process to “degrade” patches in the Yamaha TX81Z over time. The TX81Z is fairly simple within the scope of FM synths, but its spectrum of sound is still vast thanks to a few clever features: each of its four operators can use one of eight waveforms, where older FM synths offered only sine waves. The degradation process works by randomizing a shuffled list of the synth’s parameters at a specified pace. Imagine pulling bricks out of a wall and replacing them with things like a loaf of bread, Legos, or a shoe. The performer can interrupt the degradation at any moment to “freeze” a patch for later use, or loop it to generate chaotic textures that morph continuously. This excerpt stacks two layers of the degradation process with some panning and reverb to add ambience. Based on these results I anticipate that a lot more remains to be discovered through this and similar techniques. Currently I am working on a way to interpolate between the existing parameter value and the “degraded” one, for a more legato feel to the entropic process. Stay tuned!
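
The core loop is simple enough to sketch outside of Max. A minimal Haskell rendition of the idea (sendCC and the parameter list are stand-ins of mine; the actual device sends MIDI to the TX81Z rather than printing):

import Control.Concurrent (threadDelay)
import Control.Monad (forM_)
import System.Random (randomRIO)

-- stand-in for MIDI output; the real device sends CC/SysEx
-- messages to the TX81Z instead of printing
sendCC :: Int -> Int -> IO ()
sendCC param val = putStrLn ("param " ++ show param ++ " -> " ++ show val)

-- walk a pre-shuffled list of parameter numbers, replacing each
-- value with a random one at a fixed pace: a brick swapped for a
-- loaf of bread every paceMs milliseconds
degrade :: Int -> [Int] -> IO ()
degrade paceMs params =
  forM_ params $ \p -> do
    v <- randomRIO (0, 127)
    sendCC p v
    threadDelay (paceMs * 1000)

Looping degrade yields the continuously morphing textures described above; the planned legato variant would ramp from the old value to the new one over several steps instead of jumping.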

Audiovisual Granular Synthesis of Water Objects

This is a screen capture from a Max project I developed, tentatively titled AVGM for Audiovisual Grain Machine, that performs interactive, synchronized granular synthesis on corresponding sound and video. I have used the software during a performance at the Echofluxx festival in Prague and at the Katherine E. Nash Gallery for the opening of The Audible Edge exhibition during Northern Spark 2014.
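
The synchronization idea at the heart of AVGM is simple to state even without the patch: a grain is just a time window, and the same window indexes both the audio buffer and the video’s frame sequence. A toy Haskell sketch of that bookkeeping (the sample rate, frame rate, and names are illustrative assumptions, not the actual Max implementation):

-- a grain is one time window into the source material
data Grain = Grain { startSec :: Double, durSec :: Double }

-- the same window maps to sample indices (44.1 kHz audio) and
-- frame indices (30 fps video), keeping sound and image locked
audioRange :: Grain -> (Int, Int)
audioRange g = (floor (startSec g * 44100), ceiling (durSec g * 44100))

videoRange :: Grain -> (Int, Int)
videoRange g = (floor (startSec g * 30), max 1 (ceiling (durSec g * 30)))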