MNKINO Film Fest: Familiar Pavement with Aaron Marx

On August 13 I had the pleasure of performing an original film score to picture at the Landmark Center in St. Paul for MNKINO Film Fest 2015. The event featured more than twenty short films with original scores. Most of the scores were performed to the films by a talented orchestra assembled for the event. I wrote and performed the music for the film Familiar Pavement by Aaron Marx.

Performing my four minutes of electronic music to the film in real time was quite challenging. I did not use any time lock, relying instead on the original BPM and a good starting point to get the timing right. What made the timing critical (and a little tricky) was that I had processed the original film audio with filters and reverb so that it sat well within the arrangement. However, once I found a good marker in the film and practiced it several times, I was well prepared.

The original score used the DSI Tempest for all the drums and the Elektron Analog Four for bass, pads, and an arpeggio. The melody line was sequenced on the Analog Four control voltage track and played on a Korg Monotribe (if you didn’t know that was possible read this). At the event I added the Moog Sub 37 to the setup so I could harmonize and embellish the melody lines.

Vintage FM: Swapping Bricks for Loaves of Bread

I recently picked up an eighties-vintage Yamaha TX81Z FM synthesizer. I’ve always loved the sound of frequency modulation synthesis but, like many of us, lacked the patience to do the programming, especially since most FM synthesizers have hundreds of parameters (thousands for the Yamaha FS1R) that one is expected to edit via a few buttons and a thirty-two-character LCD.

Understandably, FM has largely taken a backseat to subtractive synthesis, wavetable synthesis, and sampling. In the 80s FM was great because memory was expensive. Bell tones, plucked instruments, strings, and brass could be simulated by cleverly selecting an algorithm and adjusting the frequencies, levels, and envelopes of the carrier and modulator operators. The price of that sound quality was the complexity of the instrument and the time investment it required.
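The carrier/modulator idea above fits in a few lines of code. Here is a minimal two-operator FM sketch in Python; the ratio, index, and decay values are my own illustrative choices rather than any particular patch:

```python
import math

# Two-operator FM: a modulator sine modulates the carrier's phase.
# An inharmonic carrier:modulator ratio gives a bell-like quality.
SR = 44100  # sample rate

def fm_tone(freq=220.0, ratio=3.5, index=2.0, seconds=0.5):
    """Return samples of sin(2*pi*fc*t + I(t)*sin(2*pi*fm*t))."""
    fm = freq * ratio
    out = []
    for n in range(int(SR * seconds)):
        t = n / SR
        # decaying modulation index: bright attack fading to a purer tone
        i = index * math.exp(-4.0 * t)
        out.append(math.sin(2 * math.pi * freq * t
                            + i * math.sin(2 * math.pi * fm * t)))
    return out

samples = fm_tone()
```

Sweeping `ratio` and `index` by hand is a quick way to hear why editing these parameters through a two-line LCD gets tedious.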

Soon memory fell in price, and the cost of sampling and wavetable synthesizers dropped with it. By the mid-90s the broad popularity of FM synths like the Yamaha DX7 had given way to samplers, ROMplers, and wavetable synths. Perhaps we were attracted to the realism of sampling, or the uncanny quality of pitching familiar sounds into unfamiliar territory. All of these synthesis technologies have their place, but what makes FM synthesis relevant to this day is not simulating brass or bell tones; it is the ability to uncover new sonic palettes through the complexity of math, parameters, and algorithms rather than the brute force of digital memory banks.

So, how do we navigate this world of nearly infinite possibilities? There are many approaches to this dilemma. Software editors are available, and FM synthesizer plugins like Ableton’s Operator and Native Instruments’ FM8 are much, much easier to program than their hardware counterparts, all while maintaining flexibility and sonic range. FM8 can load DX7 patches, morph between sounds, or randomize parameters. My approach to this experiment was to exploit a hardware instrument (the TX81Z) already limited by its design.

I composed this piece by designing a Max for Live process to “degrade” patches in the Yamaha TX81Z over time. The TX81Z is fairly simple within the scope of FM synths, but its spectrum of sound is still vast thanks to a few clever features: each of the four operators can use one of eight waveforms, where older FM synths offered only sine waves. The degradation process occurs as shuffled parameters in the synth are randomized at a specified pace. Imagine pulling bricks out of a wall and replacing them with things like a loaf of bread, Legos, or a shoe. The degradation can be interrupted at any moment by the performer to “freeze” a patch for later use, or looped to generate chaotic textures that morph continuously. This excerpt stacks two layers of the degradation process with some panning and reverb to add ambience. Based on these results I anticipate that a lot more can be discovered through this and similar techniques. Currently I am working on a way to interpolate between the existing parameter value and the “degraded” one for a more legato feel to the entropic process. Stay tuned!
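To make the brick-swapping idea concrete, here is a hedged sketch of the degradation loop in Python rather than Max for Live. The parameter names and ranges are my own stand-ins, not Yamaha’s: the parameters are visited in a shuffled order, each replaced with a random legal value, and a snapshot is yielded after every swap so the performer could freeze one.

```python
import random

# Hypothetical subset of TX81Z-style parameters and their value ranges;
# the real synth exposes far more, edited over MIDI. Names are mine.
PARAM_RANGES = {
    "op1_waveform": 8,   # each operator offers one of eight waveforms
    "op1_level": 100,
    "op1_ratio": 64,
    "op2_level": 100,
    "feedback": 8,
}

def degrade(patch, rng=None):
    """Visit the patch's parameters in shuffled order, replacing each
    with a random legal value: bricks swapped for loaves of bread.
    Yields a snapshot after every swap so one can be frozen."""
    rng = rng or random.Random()
    order = list(patch)
    rng.shuffle(order)
    current = dict(patch)
    for name in order:
        current[name] = rng.randrange(PARAM_RANGES[name])
        yield dict(current)

patch = {"op1_waveform": 0, "op1_level": 90, "op1_ratio": 1,
         "op2_level": 75, "feedback": 3}
snapshots = list(degrade(patch, random.Random(7)))
frozen = snapshots[-1]  # fully degraded patch, "frozen" for later use
```

In performance each swap would be sent to the hardware at the chosen pace; here the snapshots simply accumulate so any of them can be recalled.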

How Do You Do Your Live MIDI Sequencing?

Arturia BeatStep Pro

While advancements in music technology have led to amazing new instruments, some popular musical devices and applications fail to accommodate musicians with rudimentary to advanced skills in traditional techniques. Don’t get me wrong! I am all for making music technology accessible to the masses. However, with the inclusion of a few key features these devices and applications could not only be good fun for those without formal music education, but also useful for those with it. Furthermore, including those features would encourage non-traditional musicians to develop new techniques and expand their capabilities, knowledge, range, and interaction with other musicians.

One example of this is the step sequencer. Once again, don’t get me wrong! I love step sequencing. I even built a rudimentary step sequencer in Max back in 2009. Later on I made it into a Max for Live device that you can download here. Step sequencers are everywhere these days. At one point I remarked that it’s hard to buy a toaster without a step sequencer in it. To date that’s hyperbole, but step sequencers have become ubiquitous in MIDI controllers, iPad apps, synths, drum machines, and modular systems.

I love step sequencers because they encourage us to do things differently and embrace chance. However, for pragmatic music making, anyone with some basic keyboard technique will agree that recording notes in real time is faster, more efficient, and more expressive than pressing them in via buttons, mouse clicks, or touch screen taps. Simply including a real time record mode alongside the step sequencing functionality would broaden the audience and usability of these devices and applications. Many instruments already do this. Elektron machines all have real time recording, as does the DSI Tempest (although it lacks polyphonic recording). Arturia has taken a step (pun intended) in the right direction with the BeatStep Pro, which allows real time recording, also without polyphony. And most DAWs handle real time MIDI recording beautifully. So if all of these solutions exist, what’s the problem?
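The difference between the two entry modes is easy to sketch. Below is a hypothetical 16-step pattern supporting both classic step entry and a real time record mode that quantizes played notes onto the grid; nothing here models any specific device, and the grid resolution is my assumption:

```python
STEPS = 16
STEP_LEN = 0.25  # one sixteenth note, measured in beats

def step_enter(pattern, step, note):
    """Classic step entry: press a button to place a note at a step."""
    pattern[step % STEPS].append(note)

def record_realtime(events):
    """Real time record mode: quantize (time_in_beats, note) events
    onto the step grid; several notes on one step give polyphony."""
    pattern = [[] for _ in range(STEPS)]
    for t, note in events:
        pattern[round(t / STEP_LEN) % STEPS].append(note)
    return pattern

# A short played phrase: the first two notes form a chord on step 0,
# the third lands on step 4 (beat 2).
live = [(0.02, 60), (0.03, 64), (1.01, 67)]
pattern = record_realtime(live)
```

Note how the chord survives quantization intact; that is exactly the polyphonic recording the Tempest and BeatStep Pro leave out.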

For the last five years I have been developing ways to perform as a soloist without the use of a laptop computer. Q: Wait a minute, don’t all those machines you’re using have computers in them? A: Yes, but they are designed as musical instruments with tactile controls and feedback. They also rarely crash and don’t let you check Facebook (yes, that’s an advantage). There’s a whole series of arguments both for and against using laptops for live performance. Let it be known that I have no problem with anyone using laptops to make music! I do it in the studio all the time. I may do it again live at some point, but currently I have been enjoying developing techniques to work around the limitations that performing without a dedicated computer presents.

Cirklon courtesy of Sequentix

These performances include two to five synchronized MIDI devices with sequencing capabilities, buttons, knobs, pads, and/or a keyboard. I may start with some pre-recorded sequences or improvise the material, but usually it’s a combination of the two. As a musician, producer, and sound designer I have been collecting synthesizers for years and have no shortage of sound making machines. What I am lacking is a way to effectively and inexpensively manage sequencing my existing hardware in real time and with polyphony for live performances. Solutions that do more than I need, and therefore cost more than I’d like to spend, include the Sequentix Cirklon and Elektron Octatrack. There are also vintage hardware solutions like the E-mu Command Station or Yamaha RS7000. I’ll investigate these further, but they are usually bulky and difficult to program on the fly.

Pyramid euclidean screen

What I’d like to see more of are small, modern devices that push the capabilities of live sequencing into new realms while maintaining the practical workflow techniques trained musicians rely on. It’s happening to an extent on the Teenage Engineering OP-1, with its frequent firmware updates. It’s happening in a few iPad apps, but most of the MIDI sequencing apps still lack real time recording and/or polyphonic recording. The Pyramid by Squarp is the most promising development I have seen in this department recently (more about Pyramid at a later date, but for now read this from CDM). Have you found a device or app that handles all your MIDI needs? Do you know about something on the horizon that will make all your MIDI dreams possible? What devices do you use to manage your live MIDI performances?

Recent Praise for Isikles

I am very excited about the praise we have received for Isikles, a recent album I produced with Chilean producer Lister Rossel. Ironically, yesterday was the Summer Solstice, but Lister has returned to Chile in the Southern Hemisphere, where the climate is in the midst of winter. Everyone who has taken the time to listen to Isikles has appreciated the mystery and depth of this work. For example, artist, musician, and educator Piotr Szyhalski said this after listening:

It’s interesting how it seems to transport my mind in both directions on the timeline. Certain elements send me back, sometimes way back, while others have a future oriented thrust. There is a sense of silent disaster unfolding. I imagine that this is what dying might feel like: when your mind brings you a sense of comfort, which masks the finality of the event…
Piotr Szyhalski

Richard Devine, whom I had the pleasure of performing with recently at the Dakota in Minneapolis, shared these thoughts:

Isikles puts the listener on a beautiful elegant journey of ambient, soundscapes, pulses and textures. One of the best chill out albums to come out in a long time.
Richard Devine

If you haven’t had a chance to listen, try the track Corvus in the player below. It’s one of my favorites. This album, filled with analog synthesis, sound design experiments, and field recordings of ice and other things, was a joy to produce. Lister’s talent, work ethic, and conceptual clarity made it a very special collaboration. The full album is available for listening or download on our BandCamp page. Thank you for listening!

Meta Composition Lets Audience Compose Text Scores

Now that I have announced my upcoming project Instant Composer: Mad-libbed Music (ICMLM) it is only fair that I share a little bit about the thought process and inspiration behind the piece. The inspiration comes from Pauline Oliveros’ instructional scores, sonic awareness, and deep listening practice. Oliveros explains in a very matter-of-fact fashion in an interview with Darwin Grosse that her text scores are instructions for the musicians or a soloist to follow. Often allowing for broad interpretation and improvisation, the scores rarely include musical symbols or notation.

Much of my own recent work involves the exploitation of chance: duets with traffic, trains, and the Singing Ringing Tree, for example. ICMLM surrenders chance to the audience, entrusting the writing to minds free of the context surrounding the concept, preparations, and development of the “outer composition.” In this way ICMLM is a meta composition that allows the audience to compose within parameters predefined by the artist. However, the limitations placed on the compositional tool provided are not meant to confine participants.

The simplest implementation of this concept would be a text area where the author writes whatever they want. I didn’t do this in part because I wanted to make the process engaging, inviting, and user friendly. It is not my intent to intimidate the audience. This is an experiment, and we will not dismiss what anyone chooses to compose for the ensemble. The process of composing happens within a webapp allowing the composer to specify instrumentation, tonality, dynamics, mood, tempo, length, title, and author. All the choices aside from instrumentation and length can be entered freely as any word or phrase the author chooses. In some cases optional choices are offered from a context-sensitive menu, but for “mood,” for example, the author must use their own words.
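As a rough illustration of the idea, here is how form fields like those might be assembled into a text score. The field names and template are my own sketch, not the actual ICMLM webapp; skipped fields are left as blanks in Mad Lib fashion.

```python
# Hypothetical template for turning the webapp's form fields into a
# printable text score; layout and wording are assumptions.
SCORE_TEMPLATE = (
    "{title}\nby {author}\n\n"
    "For {instrumentation}. Play for about {length} minutes.\n"
    "Tonality: {tonality}. Dynamics: {dynamics}.\n"
    "Mood: {mood}. Tempo: {tempo}."
)

FIELDS = ("title", "author", "instrumentation", "length",
          "tonality", "dynamics", "mood", "tempo")

def render_score(fields):
    """Fill the template, leaving blanks where the author skipped a field."""
    values = {k: "____" for k in FIELDS}
    values.update(fields)
    return SCORE_TEMPLATE.format(**values)

score = render_score({"title": "Ice Wall", "author": "A. Listener",
                      "instrumentation": "two synthesizers",
                      "mood": "silent disaster", "length": "4"})
```

Free-text fields like mood pass straight through, which is what lets the result land anywhere between a Mad Lib and a provocation for the ensemble.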

What this means for the “outer composition” and the ensembles constructed for each piece is that the scores are almost entirely unpredictable. Scores might take the form of a Mad Lib when the author chooses to insert nonsense or humorous terms and phrases. On the other hand, fascinating challenges might arise as thoughtful and provocative language is used to inspire the improvising musicians. Whatever happens, a large part of the motivation and excitement about this project for me is not knowing what will happen until the piece is performed. I am looking forward to collaborating with the minds of our audience through the musical and sonic interpretations of their ideas.