Osmose Expressive E and the Uncanny Valley

The internet has been buzzing with demos of the Osmose Expressive E since they started arriving in VIPs’ studios earlier this year. I have been fascinated by the instrument since 3D renders of it showed up in November of 2019. Four years later, I finally have it, and now that I’ve had a day or two to allow my brain to reassemble itself, I’m ready to say something about it.

There are many directions in which artists will steer this machine. One is leveraging physical modeling to emulate acoustic instruments. Doing this first requires developing the techniques and knowledge to work the Osmose into matching the range and textures of the target instrument. Second, it requires expertly designed patches that can translate the subtleties of the player’s expression into the expected nuances. Benn Jordan has a great video here that goes into detail about how this can be done. I do not intend to address the debate over “should this be done?” in this article, other than to note that the debate is ongoing (perhaps since music was first electrically amplified) and carries far-reaching consequences for musicians and the music industry at large, of which we all ought to be aware.

Instead I’d like to share how I intend to steer my use of the Osmose. In a nutshell, this will be a similar approach to the way I use most of my synthesizers – using it to discover new, unheard, distinct, weird, and wonderful textures, and perhaps even finding ways to use it that the designers never conceived. The beauty of doing this with the Osmose is that with MPE (MIDI polyphonic expression) and three axes per key, greater depths of expression are possible than ever before with conventional keybeds. That in itself presents a bit of a paradox, because I don’t think most listeners will grasp that much of the music made with the Osmose is played on a keyboard. Even listening back to some of my own initial playing leads me to imagine exotic acoustic instruments rather than a keyboard-driven synthesizer. This creates a sort of “irrational juxtaposition” of timbre with technique, in parallel with what you might see in surrealism or AI art.
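To make “three axes per key” concrete: like other MPE controllers, the Osmose in its external MIDI mode assigns each held note its own channel so that per-note pressure, pitch bend, and timbre messages don’t collide. Below is a minimal sketch, assuming the Python mido library and a hypothetical input port name, that prints the three expressive dimensions as they typically arrive; the exact mapping of axes to messages depends on how the instrument is configured.

```python
# Minimal MPE monitor sketch (assumes the 'mido' library is installed and an
# MPE-configured controller is connected). The port name below is hypothetical;
# check mido.get_input_names() for the real one on your system.
import mido

PORT_NAME = "Osmose"  # hypothetical port name

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        ch = getattr(msg, "channel", None)  # each held note gets its own channel in MPE
        if msg.type == "note_on":
            print(f"ch {ch}: strike (velocity) = {msg.velocity}")
        elif msg.type == "aftertouch":
            print(f"ch {ch}: pressure (channel aftertouch) = {msg.value}")
        elif msg.type == "pitchwheel":
            print(f"ch {ch}: side-to-side motion (pitch bend) = {msg.pitch}")
        elif msg.type == "control_change" and msg.control == 74:
            print(f"ch {ch}: timbre / slide (CC74) = {msg.value}")
```

Listening to this stream while playing a single key makes it obvious how much continuous data the instrument generates per note, which is exactly what a patch has to translate into musical nuance.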

This leads us to the uncanny valley. I think the Osmose is inherently prone to evoking an emotional response akin to our revulsion toward humanlike robots. I admit that this may seem a bit exaggerated, but I consider it evidence of a marked advancement in music technology: the Osmose gives keyboard players the ability to inject nearly as much expression into a synthesizer engine as musicians are able to express with tactile acoustic instruments.

So the question is: how do we use all of this expression and nuance without evoking the uncanny valley? I can’t answer that yet, and perhaps we don’t need to avoid it at all. Perhaps evoking the uncanny valley will be the intent for some of us. It will take time for the world to get used to it, just as it had to get used to moving pictures, television, and the internet. Until then we can appreciate the alien newness or uncanny surrealism the instrument summons. What I’ve learned in the few hours I’ve had with the Osmose is that I need to develop new techniques, not only in my playing, but in my approach to sound design. I believe doing this and finding my own personal style of playing the Osmose will be a long, challenging road, and I expect I’ll enjoy the ride.


