New Spectral Tablature Collaboration

Spectral Tablature 2015

Part music, part visual art, and part sound design, the collaborative series Spectral Tablature is something I’ve been doing in various forms since 2013. Recently I have been working on a new piece in collaboration with Jasio Stefanski for an upcoming exhibition of his work. I’ll share more information about the exhibit in a future post. For now I’d like to present some of the content that I generated in the process of working on the project.

The image above is a spectral analysis of a piece of music that I composed deliberately to produce interesting sonic and visual forms. The piece includes three layers of sequences that slowly speed up, vary in pitch, and then slow down again. The speed of each sequence was driven by a variable-rate LFO rather than synced to a BPM. This process, along with other techniques, resulted in a form that starts simple, approaches entropy, and then returns to its original simplicity.
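For readers curious what LFO-driven sequence timing means in practice, here is a minimal sketch of the idea in Python. This is my own illustration, not the actual patch or hardware settings: the function name, the half-sine LFO shape, and all parameter values are assumptions chosen to show a sequence that accelerates toward its midpoint and relaxes back to its starting tempo.

```python
import math

def lfo_step_durations(num_steps, base_dur=0.5, depth=0.4):
    """Per-step durations (in seconds) shaped by half a sine LFO cycle.

    The LFO rises from 0 to 1 and back to 0 over the sequence, so
    steps get shorter (faster) toward the middle and return to the
    original tempo by the end -- simple, entropic, simple again.
    """
    durs = []
    for i in range(num_steps):
        lfo = math.sin(math.pi * i / num_steps)  # 0 -> 1 -> 0
        durs.append(base_dur * (1 - depth * lfo))
    return durs

durs = lfo_step_durations(16)
# First step is at the base tempo; the middle step is the fastest.
```

Feeding durations like these to a sequencer clock, instead of a fixed BPM grid, is one way to get the gradual accelerando/ritardando shape described above.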

Example with Portamento

The final piece will be reprocessed visually through a set of design criteria determined by Jasio. Once the new design has been printed I will digitize the image and reprocess it as sound. The new audio will retain the original frequencies and temporal information but the textural and timbral qualities will be completely transformed.
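The image-to-sound stage can be sketched as crude additive resynthesis: treat each pixel row as a frequency band, each column as a slice of time, and brightness as amplitude. The code below is a hypothetical illustration of that principle, not the tool actually used for the piece; the function name, the log-spaced frequency mapping, and all defaults are my own assumptions.

```python
import math

def image_to_audio(pixels, sr=8000, col_dur=0.05, f_lo=100.0, f_hi=2000.0):
    """Additive resynthesis of a grayscale 'spectrogram'.

    pixels[row][col] holds brightness in 0..1; row 0 is the highest
    frequency band, and each column becomes col_dur seconds of audio.
    """
    rows = len(pixels)
    cols = len(pixels[0])
    # Log-spaced band frequencies, top row mapped to f_hi.
    freqs = [f_hi * (f_lo / f_hi) ** (r / max(rows - 1, 1))
             for r in range(rows)]
    n_col = int(sr * col_dur)
    audio = []
    for c in range(cols):
        for n in range(n_col):
            t = (c * n_col + n) / sr
            # Sum one sine per band, weighted by pixel brightness.
            s = sum(pixels[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            audio.append(s / rows)  # normalize by band count
    return audio
```

Because only brightness and position survive the round trip, the reprocessed audio keeps the original frequencies and timing while the texture and timbre come entirely from the new printed design.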

Rule Based Electronic Music: Whistle While You Work


My rules for this piece were to compose, arrange, and produce music in real-time (edited for length, but no overdubbing) using only the three instruments discussed. The track starts with a sequence I programmed into the Moog Sub 37. Next an arpeggio is introduced from the Elektron Analog Four (A4). Soon afterward we hear the hi-hats from the DSI Tempest and a long sustained melodic chord progression, also from the A4. Finally the rest of the percussion is supplied by the Tempest along with a bass line. From there on out it's a matter of arranging the existing parts (muting and un-muting) with a little real-time knob tweaking.

What made this piece different for me was sending the output of the Tempest into the A4's external inputs, which allows external signals to be processed through the reverb and delay built into the A4. So when performing a roll on the Tempest, for example, I can turn up the reverb or delay on the A4's external input to add some additional character to the sound. This is going to be really nice for upcoming performances. Since the A4 has two inputs, I may run sends into each, then apply reverb to one and delay (perhaps with a touch of chorus) to the other. This would give me a reverb send and a delay send for everything plugged into the mixer. Expect to hear more experiments exploiting these and other techniques in upcoming posts.

Hi-8, Bleep Labs, Moog Sub 37, Minifooger, Elektron Analog 4


This analog-sourced audiovisual piece is a collaboration with video artist Chris LeBlanc. The visuals were performed with a Hi-8 camera running through Tachyons+ and LoFiFuture processors, and keyed with a Bleep Labs synth. On the music end I'm playing my Moog Sub 37 through my Minifooger Delay, synced to an Elektron Analog Four. I sent Chris separate signals from the Sub 37 and the A4 that he used to make the visuals respond.

AVGM: Rheology

Here’s another movement from my composition Vocalise Sintetica that I performed at Echofluxx in Prague and later during Northern Spark 2014. I named the movement Rheology after the study of the flow of matter in the liquid state. The audiovisual content was created with a Max patch I developed called AVGM (AV Grain Machine). The instruments that I used to create the accompaniment include: DSI Tempest, Bass Station II, Korg Volca Keys, and Memory Man Delay.

Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real-time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast or mimic the audiovisual objects. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
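The granulation at the heart of the performance can be sketched in a few lines. This is not the AVGM Max patch itself, only a hypothetical illustration of the basic technique it applies to audio: short, windowed grains are copied from random positions in a source buffer and overlapped onto an output stream.

```python
import math
import random

def granulate(source, grain_len=200, hop=50, n_grains=100, seed=1):
    """Naive granular resynthesis.

    Copies `n_grains` short grains from random positions in `source`,
    applies a Hann envelope to each (to avoid clicks at grain edges),
    and overlap-adds them onto the output every `hop` samples.
    """
    rng = random.Random(seed)
    out = [0.0] * (n_grains * hop + grain_len)
    env = [0.5 - 0.5 * math.cos(2 * math.pi * n / grain_len)
           for n in range(grain_len)]  # Hann window
    for g in range(n_grains):
        start = rng.randrange(0, len(source) - grain_len)
        for n in range(grain_len):
            out[g * hop + n] += source[start + n] * env[n]
    return out
```

In a real-time setting like the AVGM performance, the grain position and rate would be driven by the performer's multitouch gestures rather than a random generator, and the same logic would be applied to video frames as well as audio.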