Vintage FM: Swapping Bricks for Loaves of Bread

it_speaks

I recently picked up a vintage eighties Yamaha TX81Z FM synthesizer. I’ve always loved the sound of frequency modulation synthesis, but, like many of us, lacked the patience to do the programming, especially since most FM synthesizers have hundreds of parameters (thousands, in the case of the Yamaha FS1R) that one is expected to edit via a few buttons and a thirty-two-character LCD.

Understandably, FM has largely taken a backseat to subtractive synthesis, wavetable synthesis, and sampling. In the 80s FM was great because memory was expensive. Bell tones, plucked instruments, strings, and brass could be simulated by cleverly selecting an algorithm and adjusting the frequencies, levels, and envelopes of the carrier and modulator operators. The price of that sound quality was handling the complexity of the instrument and the time investment that it required.

Soon memory fell in price and the cost of sampling and wavetable synthesizers dropped with it. By the mid-90s the broad popularity of FM synths like the Yamaha DX7 had given way to samplers, ROMplers, and wavetable synths. Perhaps we were attracted to the realism of sampling, or the uncanny quality of pitching familiar sounds into unfamiliar territory. But all of these synthesis technologies have their place, and what makes FM synthesis relevant to this day is not simulating brass or bell tones, but its ability to uncover new sonic palettes through the complexity of maths, parameters, and algorithms rather than the brute force of digital memory banks.

So, how do we navigate this world of nearly infinite possibilities? There are many approaches to this dilemma. Software editors are available, and FM synthesizer plugins like Ableton’s Operator and Native Instruments’ FM8 are much, much easier to program than their hardware counterparts, all while maintaining flexibility and sonic range. FM8 can load DX7 patches, morph between sounds, or randomize parameters. My approach to this experiment was to exploit a hardware instrument (the TX81Z) already limited by its design.

fm_degradation

I composed this piece by designing a Max for Live process to “degrade” patches in the Yamaha TX81Z over time. The TX81Z is fairly simple within the scope of FM synths. However, its spectrum of sound is still vast thanks to a few clever features: each of the four operators can use one of eight waveforms, whereas older FM synths offered only sine waves. The degradation process occurs as shuffled parameters in the synth are randomized at a specified pace. Imagine pulling bricks out of a wall and replacing them with things like a loaf of bread, Legos, or a shoe. The degradation can be interrupted at any moment by the performer to “freeze” a patch for later use, or looped to generate chaotic textures that morph continuously. This excerpt stacks two layers of the degradation process with some panning and reverb to add ambience. Based on these results I anticipate that a lot more is available to be discovered through this and similar techniques. Currently I am working on a way to interpolate between the existing parameter and the “degraded” one for a more legato feel to the entropic process. Stay tuned!
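For readers curious about the mechanics, the shuffle-and-randomize idea can be sketched outside of Max for Live. The following is a minimal, hypothetical Python sketch, not the actual Max for Live device: the parameter names and value ranges are illustrative stand-ins, not the TX81Z’s real sysex map, and sending the values to hardware over MIDI is left out.

```python
import random

# Illustrative parameter map (name -> maximum value). These names and ranges
# are assumptions for the sketch, not the TX81Z's actual parameter list.
PARAMS = {
    "op1_level": 99, "op1_ratio": 63, "op1_waveform": 7,
    "op2_level": 99, "op2_ratio": 63, "op2_waveform": 7,
}

def degrade(patch, rng):
    """Yield successive patches, randomizing one shuffled parameter per step.

    Like pulling bricks out of a wall one at a time and replacing each with
    a loaf of bread: every yielded snapshot can be "frozen" by the performer.
    """
    order = list(patch)
    rng.shuffle(order)  # shuffle which parameter gets replaced at each step
    current = dict(patch)
    for name in order:
        current[name] = rng.randint(0, PARAMS[name])  # swap brick for bread
        yield dict(current)  # snapshot, so a patch can be frozen for later

# Start from a mid-range patch and run the full degradation once through.
patch = {name: maxval // 2 for name, maxval in PARAMS.items()}
frames = list(degrade(patch, random.Random(1)))
```

Looping `degrade` over the latest frame would give the continuously morphing textures described above, and the planned legato variant could linearly interpolate each parameter toward its new random target instead of jumping.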

This entry was posted in Max for Live, One Synthesizer Sound Every Day, Sound Design by John CS Keston.

About John CS Keston

John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research. John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minnesota Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012) and he composed the music for the short Familiar Pavement (2015).
He has appeared on more than a dozen albums including two solo albums on UnearthedMusic.com.
