Novation Circuit Randomized Patches


In my mind, sound design is at its best when it is a process of discovery. At its worst, it can be an unfortunate exercise in mimicry. I am fascinated by discovering sound through happy accidents, and one technique I have exploited frequently in this regard is synthesizer patch randomization. For example, the Yamaha TX81Z sounds great when randomized, or better yet, “degraded” by interpolating between shuffled parameter values at a set time unit or clock division. The PreenFM2 has patch randomization built directly into the instrument!
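To make the “degrade” idea concrete, here is a minimal sketch in Python, assuming the mido library and a synth that responds to MIDI CC messages: shuffle new target values for a handful of parameters, then interpolate toward them one step per clock division. The CC numbers are placeholders (the TX81Z is actually edited over SysEx rather than CC), so treat this as an illustration of the technique, not a working TX81Z degrader.

```python
import random
import time
import mido

PARAM_CCS = [74, 71, 76, 77]   # hypothetical CC numbers for four parameters
STEPS = 16                     # interpolation steps per "degrade" cycle
STEP_SECONDS = 0.125           # one step per 16th note at 120 BPM

out = mido.open_output()       # first available MIDI output

current = {cc: 64 for cc in PARAM_CCS}
for _ in range(8):             # eight degrade cycles
    # choose a new shuffled set of target values
    targets = {cc: random.randint(0, 127) for cc in PARAM_CCS}
    for step in range(1, STEPS + 1):
        for cc in PARAM_CCS:
            # linear interpolation from the current value toward the target
            value = round(current[cc] + (targets[cc] - current[cc]) * step / STEPS)
            out.send(mido.Message('control_change', control=cc, value=value))
        time.sleep(STEP_SECONDS)
    current = targets
```

Slowing STEP_SECONDS down, or syncing it to an incoming MIDI clock, gives the gradual “melting” quality that makes the technique so much fun.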

So, it wasn’t long after picking up a Novation Circuit that I had the urge to use a similar shortcut to mine fantastic and otherworldly sounds from the unit. The full MIDI specification for the Circuit is available, so developing a standalone randomizer is possible, but Isotonik Studios has published a free Max for Live editor in partnership with Novation. Max for Live patches are inherently editable, so I decided to start there.

Send Random Values

It took me a couple of hours to get into the guts of the editor and set up a drop-down menu for randomization. The drop-down has choices to either “randomize all” (not quite all parameters) or randomize one of seven sets of grouped parameters, like the oscillator section, mod matrix, or LFOs. At this stage I haven’t included the EQ section, voice controls, or macro controls. I probably won’t add the EQ, but the macro controls might offer some interesting possibilities. The image above shows a simple subpatch I made that takes a bang and outputs random values for the oscillator section. Unfortunately, I cannot legally share my mods based on Isotonik’s and Novation’s EULAs. However, you’ll need little more than a basic understanding of Max to do this yourself. Check out the video and let me know what you think in the comments.
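In textual form, the grouped randomization amounts to little more than the following hypothetical Python sketch using mido. The real CC map comes from Novation’s published MIDI specification; the numbers below are placeholders only.

```python
import random
import mido

# Hypothetical parameter groups; the Circuit's actual CC assignments are in
# Novation's MIDI specification and will differ from these placeholders.
GROUPS = {
    'oscillators': [19, 20, 21, 22],
    'mod matrix':  [40, 41, 42, 43],
    'lfos':        [72, 73, 74, 75],
}

def randomize(out, group='all'):
    """Send a random value for every CC in the chosen group ('all' = every group)."""
    ccs = [cc for g in GROUPS.values() for cc in g] if group == 'all' else GROUPS[group]
    for cc in ccs:
        out.send(mido.Message('control_change', control=cc,
                              value=random.randint(0, 127)))

out = mido.open_output()
randomize(out, 'oscillators')   # equivalent to banging the oscillator subpatch
```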

Interactivity Sonified Workshop at INST-INT

The INST-INT 2015 conference, exploring the “…art of Interactivity for objects, environment, and experiences,” just happened and I had the honor and privilege of giving a workshop at the event titled Interactivity Sonified. The intent of the workshop was to teach attendees to sonify their work by triggering, generating, and processing sonic textures or musical forms through interactivity. I covered several basic programming techniques for including sound in projects with input devices and numerical values. Touchscreens, microphones, cameras, gyros, MIDI controllers, or any other stream or set of incoming data might be used to add sound. The sonification of this information adds a whole new sensory dimension to interactive installations and performances.
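As a taste of the basic mapping step, here is a minimal Python sketch, assuming mido and a synth listening on the default MIDI output, that quantizes any normalized input stream onto a pentatonic scale. Pinning arbitrary data to a scale is one simple way to keep the results musical.

```python
import time
import mido

out = mido.open_output()
SCALE = (0, 2, 4, 7, 9)   # major pentatonic scale degrees

def sonify(value):
    """Map a 0.0-1.0 input value to a note across three octaves of the scale."""
    value = min(max(value, 0.0), 1.0)              # clamp the incoming value
    degree = int(value * (len(SCALE) * 3 - 1))     # pick one of 15 scale steps
    note = 48 + 12 * (degree // len(SCALE)) + SCALE[degree % len(SCALE)]
    out.send(mido.Message('note_on', note=note, velocity=100))
    time.sleep(0.1)
    out.send(mido.Message('note_off', note=note))

for v in (0.1, 0.5, 0.9, 0.5):   # stand-in for a real sensor stream
    sonify(v)
```

Swap the stand-in tuple for trackpad coordinates, microphone levels, or Leap Motion hand positions and the same mapping applies unchanged.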

During the workshop I covered sonification examples in Processing.org and Max while looking at input signals from the Leap Motion, MIDI controllers, video camera, microphone, keyboard, and trackpad. We experimented with recording, looping, reversing, pitch shifting, and granulating sampled audio. We also looked at modeling waveforms and processing them through lowpass, highpass, and bandpass filters, delays, and reverbs. Finally, we looked at the convolution reverb in Max for Live, trying out several of the included IRs as well as discussing the technique of sampling impulse responses.
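Granulation itself is simple enough to sketch outside of Max. The toy granulator below, written in Python with NumPy, chops a source signal into short windowed grains, reverses some of them, and scatters them over a few seconds with overlap-add; the grain size and density are arbitrary choices, not values from the workshop patches.

```python
import numpy as np

def granulate(audio, sr, grain_ms=60, density=200, spread_s=4.0, seed=0):
    """Scatter windowed grains of `audio` (mono float array at `sr` Hz) in time."""
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000)
    window = np.hanning(glen)                     # fade each grain to avoid clicks
    out = np.zeros(int(sr * spread_s) + glen)
    for _ in range(density):
        src = rng.integers(0, len(audio) - glen)  # random read position in source
        dst = rng.integers(0, len(out) - glen)    # random write position in output
        grain = audio[src:src + glen] * window
        if rng.random() < 0.5:
            grain = grain[::-1]                   # reverse half of the grains
        out[dst:dst + glen] += grain              # overlap-add into the texture
    return out / max(1.0, np.abs(out).max())      # normalize to avoid clipping

sr = 44100
audio = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in source tone
texture = granulate(audio, sr)
```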

In this video I asked the attendees to unplug their headphone cords after completing the task of triggering sounds with a moving object. The resulting cacophony in the room was quite beautiful! I thoroughly enjoyed giving this workshop and would love to do it again. Please be in touch if you’re part of an organization interested in a workshop like this. For more information you can view the slideshow for the workshop at instint.johnkeston.com. Keep in mind that the slideshow covers just a fraction of the activities; most of the time was spent applying code and examples to trigger, generate, or process sound.

AVGM: Rheology

Here’s another movement from my composition Vocalise Sintetica that I performed at Echofluxx in Prague and later during Northern Spark 2014. I named the movement Rheology after the study of the flow of matter in the liquid state. The audiovisual content was created with a Max patch I developed called AVGM (AV Grain Machine). The instruments I used to create the accompaniment include the DSI Tempest, Bass Station II, Korg Volca Keys, and Memory Man Delay.

AVGM with Tempest, BSII, and Volca Keys

During Northern Spark 2014 I performed a version of Vocalise Sintetica at the Katherine E. Nash Gallery. The event also marked the opening of The Audible Edge (May 27 through July 26, 2014), a sound art exhibit in which I am also taking part. Since it was a local performance, I decided to introduce the DSI Tempest into the setup (along with the Bass Station II, Korg Volca Keys, and Memory Man Delay).

This led me in a completely different direction than the performance in Prague. I was quite happy with the results, so I produced a few studio versions of alternative movements. For these videos I made a screen capture of the AVGM and interspersed it with shots of the instrumentation. Here’s the first alternative movement, I. Machines. I hope to post a couple more movements at a later date. View photos from the performance below.

Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic the audiovisual objects. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
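The gesture looping can be sketched in a few lines. The hypothetical Python class below records timestamped control events (think multitouch positions driving a granulator) and replays them cyclically; AVGM itself is a Max patch, so this is only a rough textual analogy of the behavior, not its implementation.

```python
import time

class GestureLoop:
    def __init__(self):
        self.events = []          # list of (time offset, value) pairs
        self.start = None

    def record(self, value):
        """Store a control value with its offset from the first event."""
        now = time.monotonic()
        if self.start is None:
            self.start = now
        self.events.append((now - self.start, value))

    def play(self, handler, cycles=2):
        """Replay the recorded gesture, preserving its timing, `cycles` times."""
        for _ in range(cycles):
            t0 = time.monotonic()
            for offset, value in self.events:
                time.sleep(max(0.0, offset - (time.monotonic() - t0)))
                handler(value)

loop = GestureLoop()
for v in (0.2, 0.4, 0.8):         # stand-in for captured touch positions
    loop.record(v)
    time.sleep(0.05)
loop.play(print)                   # replay the gesture twice
```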