TX81Z Patch Degrader with Interpolation

This quick demo illustrates how TX81Z Patch Degrader interpolates between previous and newly generated parameter values. TX81Z Patch Degrader is a Max for Live MIDI effect that chips away at patches on the TX81Z by randomly changing (or degrading) parameters at a specified rate. What makes the process interesting is that it is possible to ramp up or down to the new value (interpolate) rather than change it instantaneously.
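
At its core, the interpolation is just a timed ramp from a parameter's current value to its randomly chosen target. Here is a minimal sketch of the idea in Python (the device itself is a Max for Live patch, so the send callback and step count below are illustrative, not the actual implementation):

    import time

    def interpolate(current, target, rate_ms, steps=16, send=print):
        """Ramp a parameter from its current value to a new target over
        rate_ms milliseconds instead of jumping there instantly."""
        step_time = (rate_ms / 1000.0) / steps
        for i in range(1, steps + 1):
            value = round(current + (target - current) * i / steps)
            send(value)          # e.g. transmit a parameter change message
            time.sleep(step_time)

    # Ramp an operator level from 99 down to a degraded value of 40 over 500 ms:
    interpolate(99, 40, rate_ms=500)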

To create the Max for Live MIDI effect I started with TX81Z Editor 1.0 by Jeroen Liebregts, who was kind enough to share his work on maxforlive.com. I added the degradation features and made some adjustments to the interface to make room for the controls. Once I get things shaped up I’ll be happy to share the patch if anyone is interested.

[Screenshot: the TX81Z Patch Degrader interface]

The features I added are visible in the second panel of the TX81Z Patch Degrader Max for Live MIDI effect. I’ll describe them from the top down (a rough sketch of the whole process follows the list):

  1. Level bypass prevents the operator levels from being included in the degradation process so that the sound doesn’t completely die out.
  2. When the interpolate switch is on, parameters (as long as they have an adequate range) are ramped up or down to their new values based on the rate.
  3. Loop causes the degradation to continue indefinitely by reshuffling after all 73 included parameters have been degraded.
  4. Free/sync toggles between changing the parameters at an arbitrary pace set by rate, or at note divisions based on the project’s tempo (so sync mode only degrades while the transport is playing).
  5. Rate adjusts the pace of degradation in free mode, and the time it takes to ramp up or down to new values when interpolate is on. Rate is in milliseconds and ranges from 15 ms to 2000 ms.
  6. Below rate are the note durations for sync mode, ranging from a 1/128th note up to a dotted whole note.
  7. Finally, the degrade button starts the process, while interrupt stops everything, so that when you hear something you like you can save the patch on the TX81Z.
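
Putting these controls together, the heart of the degradation process could be sketched in Python roughly as follows. To be clear, the actual device is a Max patch; the parameter table, ranges, and send function here are placeholder assumptions, not the TX81Z’s real values:

    import random
    import time

    # Hypothetical parameter table: (number, min, max, is_operator_level).
    # Real numbers and ranges come from the TX81Z MIDI implementation chart.
    PARAMS = [
        (0, 0, 99, False),   # e.g. an envelope rate
        (1, 0, 99, True),    # e.g. an operator output level
        # ... 73 degradable parameters in total on the real device
    ]

    def degrade(params, rate_ms, level_bypass=True, loop=False,
                send=lambda num, val: print(num, val)):
        """Shuffle the parameters and randomize each one in turn at the
        given rate; with loop on, reshuffle and start over indefinitely."""
        while True:
            order = [p for p in params if not (level_bypass and p[3])]
            random.shuffle(order)
            for num, lo, hi, _ in order:
                send(num, random.randint(lo, hi))  # fire a parameter change
                time.sleep(rate_ms / 1000.0)       # free mode; sync mode would
                                                   # wait for a note division
            if not loop:
                break

    degrade(PARAMS, rate_ms=250)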

The TX81Z has a fairly small buffer for MIDI values, so spraying values at it too quickly will generate a “MIDI Buffer Error”. However, even after reporting the error it continues listening to incoming data, so although it might skip a parameter here and there it lets me keep throwing things at it. The video below shows how the LCD display responds to the stream of values coming at the machine.
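
One way to stay under that limit from software is simply to throttle outgoing messages. Below is a rough sketch using the Python mido library; note that the group byte (0x12) and parameter numbers are assumptions that should be checked against the TX81Z MIDI implementation chart:

    import time
    import mido

    out = mido.open_output()  # choose the port wired to the TX81Z

    def send_param(param, value, channel=0, min_gap=0.02):
        """Send a Yamaha parameter-change sysex, then pause briefly so
        the TX81Z's small MIDI buffer has time to drain."""
        # Framing: F0 43 1n gg pp vv F7 (mido adds the F0/F7 itself).
        data = [0x43, 0x10 | channel, 0x12, param, value]
        out.send(mido.Message('sysex', data=data))
        time.sleep(min_gap)  # ~20 ms between messages avoids buffer errors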

[Video: “TX81Z Patch Degradation with Interpolation! #glitch #fmsynthesis,” posted by John Keston (@jkeston)]

I’ve saved quite a few very interesting effects so far and have nearly run out of the 32 patch positions available on the unit. Perhaps the next step is to add a library feature, especially since I’m not thrilled about the idea of saving patch banks to cassette!


Vintage FM: Swapping Bricks for Loaves of Bread


I recently picked up an eighties-vintage Yamaha TX81Z FM synthesizer. I’ve always loved the sound of frequency modulation synthesis but, like many of us, lacked the patience to do the programming, especially since most FM synthesizers have hundreds (thousands for the Yamaha FS1R) of parameters that one is expected to edit via a few buttons and a thirty-two-character LCD.

Understandably, FM has largely taken a back seat to subtractive synthesis, wavetable synthesis, and sampling. In the 80s FM was great because memory was expensive. Bell tones, plucked instruments, strings, and brass could be simulated by cleverly selecting an algorithm and adjusting the frequencies, levels, and envelopes of the carrier and modulator operators. The price of that sound quality was the complexity of the instrument and the time investment it required.
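
For anyone new to the technique, the simplest case is a single modulator operator driving a single carrier: the modulator’s output is added to the carrier’s phase, and the modulation index controls how bright the result is. A bare-bones rendering of one second of two-operator FM in Python:

    import numpy as np

    SR = 44100                      # sample rate
    t = np.arange(SR) / SR          # one second of time

    fc, fm = 440.0, 660.0           # carrier and modulator at a 3:2 ratio
    index = 4.0 * np.exp(-3.0 * t)  # modulation index with a decaying envelope

    # Two-operator FM: a decaying index brightens the attack and mellows
    # the tail, which is what gives FM its bell- and pluck-like character.
    signal = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))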

Soon memory fell in price and the cost of sampling and wavetable synthesizers dropped with it. By the mid-90s the broad popularity of FM synths like the Yamaha DX7 had given way to samplers, ROMplers, and wavetable synths. Perhaps we were attracted to the realism of sampling, or the uncanny quality of pitching familiar sounds into unfamiliar territory. But all of these synthesis technologies have their place, and what makes FM synthesis relevant to this day is not simulating brass or bell tones, but its ability to uncover new sonic palettes through the complexity of maths, parameters, and algorithms versus the brute force of digital memory banks.

So, how do we navigate this world of nearly infinite possibilities? There are many approaches to this dilemma. Software editors are available, and FM synthesizer plugins like Ableton’s Operator and Native Instruments’ FM8 are much, much easier to program than their hardware counterparts, all while maintaining flexibility and sonic range. FM8 can load DX7 patches, morph between sounds, or randomize parameters. My approach for this experiment was to exploit a hardware instrument (the TX81Z) already limited by its design.

[Audio: fm_degradation]

I composed this piece by designing a Max for Live process to “degrade” patches in the Yamaha TX81Z over time. The TX81Z is fairly simple within the scope of FM synths. However, its spectrum of sound is still vast thanks to a few clever features: each of the four operators can have one of eight waveforms, while older FM synths only had sine waves. The degradation process occurs as shuffled parameters in the synth are randomized at a specified pace. Imagine pulling bricks out of a wall and replacing them with things like a loaf of bread, Legos, or a shoe. The degradation can be interrupted at any moment by the performer to “freeze” a patch for later use, or looped to generate chaotic textures that morph continuously. This excerpt stacks two layers of the degradation process with some panning and reverb to add ambience. Based on these results I anticipate that a lot more remains to be discovered through this and similar techniques. Currently I am working on a way to interpolate between the existing parameter and the “degraded” one for a more legato feel to the entropic process. Stay tuned!

Slam Academy of Electronic Arts

I have recently accepted a position as an adjunct instructor at the Slam Academy in Minneapolis, Minnesota. With two Ableton-certified instructors, the school offers a variety of classes in electronic music while also stretching out into topics like Max for Live and music for video games. I will be teaching occasional master classes and private lessons focused on my specialties: Max/MSP, Max for Live, Processing, sound synthesis, and jazz theory. Please check out the school at Slam Academy, or like the Facebook page for more information.

Northern Spark In Habit: Living Patterns

Many of you know that I have been working on an eight-channel spatialized sound, projection, and dance collaboration for almost two years. I composed the music entirely using my collection of analog synthesizers. I also designed an octal sound system (eight discrete channels) to spatialize the music and sounds. The performances are Thursday, June 7 at 9pm, Friday, June 8 at 9pm, and Saturday, June 9 from 9pm until 6am (yes, that is 9 long hours). Check out In Habit: Living Patterns for the location and other details.

What may be of particular interest to ACB readers is how I am processing the music for spatialization. The outdoor stage is a raised 18′ × 18′ square that the audience can view from any angle. At each corner I have outward-facing wedges to project sound toward the audience. Behind the audience I have inward-facing speakers on stands, also at each corner of the venue (a public space under the 3rd Avenue bridge in Minneapolis, by the Mississippi River, across from the St. Anthony Main Movie Theatre).

Using a Max for Live patch that I developed and another that is part of the M4L toolset, I am able to rotate sounds around the system in many ways. This includes clockwise and/or anti-clockwise rotation at variable frequencies around the outer quad, the inner quad, or both. I can also pan sound between the inner and outer quads with or without the rotation happening simultaneously. Quick adjustments allow me to create cross pans for sweeping diagonals and so on. I originally thought I could do this with one of the many M4L LFOs, but found out this would be impossible. In a future post I will explain why I had to develop my own patch to do this. For now, please enjoy a sadly two-channel rough mix of Kolum, the second in the series of sixteen vignettes, and come to the performance to hear it in all of its spatialized, eight-channel glory.
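
In outline, rotating a sound around one quad amounts to mapping an angle to four speaker gains, for example with constant-power panning between adjacent speakers, and then sweeping that angle over time. A sketch of the underlying math (an illustration only, not the actual M4L patch):

    import math

    def quad_gains(angle):
        """Constant-power gains for four speakers at 0, 90, 180, and 270
        degrees. Sweeping the angle rotates the sound around the quad;
        a second instance can drive the inner or outer ring."""
        gains = []
        for i in range(4):
            # Angular distance from this speaker, folded into 0..180
            d = abs((angle - i * 90 + 180) % 360 - 180)
            # Only the two speakers within 90 degrees are ever active
            gains.append(math.cos(math.radians(d)) if d < 90 else 0.0)
        return gains

    # 0 degrees puts the sound entirely in speaker 0; 45 degrees splits it
    # equally (at -3 dB each) between speakers 0 and 1.
    print(quad_gains(0), quad_gains(45))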

Water Dripping Sample in GrainMachine

This evening I have been working on the user interface for GrainMachine, a Max for Live instrument I developed for personal use in October of 2009. In the process of tonight’s testing I came up with this sound. I started with a sample of water dripping, loaded it into GrainMachine, and then chose a very narrow grain at a fairly low frequency. Finally, I swept slowly through the position of the sample, creating the result heard below.
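
For those curious about the mechanics, granular playback of this kind boils down to overlapping short windowed slices of the sample read from a movable position. A bare-bones sketch in Python (GrainMachine itself is a Max for Live instrument; the names and defaults here are only illustrative):

    import numpy as np

    def grains(sample, sr, position, grain_ms=20.0, rate_hz=40.0,
               seconds=2.0, sweep=0.0):
        """Overlap short Hann-windowed grains read at position (0..1)
        within sample; a nonzero sweep slowly drifts the read position,
        producing a gradually evolving texture."""
        glen = int(sr * grain_ms / 1000)   # grain length in samples
        hop = int(sr / rate_hz)            # samples between grain onsets
        window = np.hanning(glen)
        out = np.zeros(int(sr * seconds) + glen)
        for i in range(0, len(out) - glen, hop):
            pos = min(position + sweep * i / len(out), 1.0)
            start = int(pos * (len(sample) - glen))
            out[i:i + glen] += sample[start:start + glen] * window
        return out

    # e.g. narrow 20 ms grains at 40 Hz, sweeping slowly through the file:
    # texture = grains(water_drip, 44100, position=0.1, sweep=0.5)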
