Interactivity Sonified Workshop at INST-INT

The INST-INT 2015 conference, exploring the “…art of Interactivity for objects, environment, and experiences,” just happened, and I had the honor and privilege of giving a workshop at the event titled Interactivity Sonified. The intent of the workshop was to teach attendees to sonify their work by triggering, generating, and processing sonic textures or musical forms through interactivity. I covered several basic programming techniques for adding sound to projects driven by input devices and numerical values. Touchscreens, microphones, cameras, gyros, MIDI controllers, or any other stream of incoming data might be used to drive sound. Sonifying this information adds a whole new sensory dimension to interactive installations and performances.
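To give a flavor of what that looks like in practice, here is a minimal sketch in the spirit of the workshop (not one of the actual workshop examples) using Processing's sound library, with the mouse standing in for any incoming data stream:

    import processing.sound.*;

    SinOsc osc;

    void setup() {
      size(400, 400);
      osc = new SinOsc(this);
      osc.play();
    }

    void draw() {
      // Map any incoming numeric stream to pitch and loudness; here the
      // mouse stands in for a touchscreen, gyro, camera, or sensor value.
      osc.freq(map(mouseX, 0, width, 110, 880));
      osc.amp(map(mouseY, height, 0, 0.0, 0.8));
    }

Swap the mouse coordinates for any other pair of numbers and the same two lines in draw() sonify that data instead.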

During the workshop I covered sonification examples in Processing.org and Max while looking at input signals from the Leap Motion, MIDI controllers, a video camera, a microphone, the keyboard, and the trackpad. We experimented with recording, looping, reversing, pitch shifting, and granulating sampled audio. We also looked at modeling waveforms and processing them through lowpass, highpass, and bandpass filters, delays, and reverbs. Finally, we explored the convolution reverb in Max for Live, trying out several of the included IRs as well as discussing the technique of sampling impulse responses.
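As an illustration of the sample-processing side, this minimal sketch (assuming a placeholder sample named loop.wav in the sketch's data folder) loops a sound file while the mouse shifts its pitch and sweeps a lowpass filter:

    import processing.sound.*;

    SoundFile sample;
    LowPass filter;

    void setup() {
      size(400, 400);
      sample = new SoundFile(this, "loop.wav"); // placeholder file name
      filter = new LowPass(this);
      filter.process(sample);
      sample.loop();
    }

    void draw() {
      // mouseX shifts pitch by changing the playback rate (0.25x to 4x)
      sample.rate(map(mouseX, 0, width, 0.25, 4.0));
      // mouseY sweeps the lowpass cutoff from 100 Hz to 10 kHz
      filter.freq(map(mouseY, height, 0, 100, 10000));
    }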

In this video I asked the attendees to pull out their headphone cords after completing the task of triggering sounds with a moving object. The resulting cacophony in the room was quite beautiful! I thoroughly enjoyed giving this workshop and would love to do it again, so please be in touch if you’re part of an organization interested in a workshop like this. For more information you can view the slideshow for the workshop at instint.johnkeston.com. Keep in mind that the slideshow covers just a fraction of the activities; most of the time was spent applying code and examples to trigger, generate, or process sound.

Art + Music + Technology


Recently I had the honor and pleasure of having a discussion with Darwin Grosse for his podcast Art + Music + Technology. If you’re not familiar with his interviews, I suggest that you check out the program. Darwin’s straightforward conversations with a broad range of media artists fill a void that no other program does. It’s hard to single out any episodes specifically because they are all entertaining (and educational), but some of my favorites (sorted alphabetically) include:

Brian Crabtree
Richard Devine
R. Luke DuBois
Mark Henrickson
Andrew Kilpatrick
Keith McMillen
Ali Momeni
Pauline Oliveros
Gregory Taylor
David Zicarelli

How Do You Do Your Live MIDI Sequencing?

Arturia BeatStep Pro

While advancements in music technology have led to amazing new instruments, some popular musical devices and applications fail to accommodate musicians with rudimentary to advanced skills in traditional techniques. Don’t get me wrong! I am all for making music technology accessible to the masses. However, with the inclusion of a few key features these devices and applications could be not only good fun for those without formal music education, but also useful for those with it. Furthermore, including those features would encourage non-traditional musicians to develop new techniques and expand their capabilities, knowledge, range, and interaction with other musicians.

SimpleStepSeq

One example of this is the step sequencer. Once again, don’t get me wrong! I love step sequencing. I even built a rudimentary step sequencer in Max back in 2009, and later made it into a Max for Live device that you can download here. Step sequencers are everywhere these days. At one point I remarked that it’s hard to buy a toaster without a step sequencer in it. So far that remains hyperbole, but step sequencers have become ubiquitous in MIDI controllers, iPad apps, synths, drum machines, and modular systems.
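For readers curious what a step sequencer boils down to, here is a bare-bones sketch of the idea in Processing (an illustration for this post, not the Max patch above): a clock advances through an array of notes, one step per 16th note.

    import processing.sound.*;

    // Sixteen steps of MIDI note numbers.
    int[] steps = {60, 63, 67, 70, 60, 63, 67, 72, 58, 63, 67, 70, 58, 62, 65, 70};
    int step = 0, lastTick = 0;
    int stepMs = 125; // one 16th note at 120 BPM
    TriOsc osc;
    Env env;

    void setup() {
      osc = new TriOsc(this);
      env = new Env(this);
    }

    void draw() {
      // draw() runs at roughly 60 fps, so timing jitters by a frame;
      // fine for a sketch, not for an instrument.
      if (millis() - lastTick >= stepMs) {
        lastTick = millis();
        osc.freq(440 * pow(2, (steps[step] - 69) / 12.0));
        env.play(osc, 0.005, 0.08, 0.5, 0.05); // attack, sustain, level, release
        step = (step + 1) % steps.length;
      }
    }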

I love step sequencers because they encourage us to do things differently and embrace chance. However, for pragmatic music making, anyone with some basic keyboard technique will agree that recording notes in real time is faster, more efficient, and more expressive than pressing them in via buttons, mouse clicks, or touchscreen taps. Simply including a real-time record mode in addition to the step sequencing functionality would broaden the audience and usability of these devices and applications. Many instruments already do this. Elektron machines all have real-time recording, as does the DSI Tempest (although it lacks polyphonic recording). Arturia has gone a step (pun intended) in the right direction with the BeatStep Pro, allowing real-time recording, also without polyphony. And most DAWs handle real-time MIDI recording beautifully. So if all of these solutions exist, what’s the problem?
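The missing feature is conceptually tiny. Here is a hedged sketch of how a real-time record mode could sit on top of the same step grid: time-stamp incoming notes and quantize them to the nearest step while the loop keeps playing (the keys a–k stand in for pads).

    import processing.sound.*;

    int[] pattern = new int[16];   // 0 = empty step, otherwise a MIDI note
    int[] scale = {60, 62, 63, 65, 67, 68, 70, 72}; // C minor scale
    int step = 0, lastTick = 0, loopStart = 0;
    int stepMs = 125;              // one 16th note at 120 BPM
    TriOsc osc;
    Env env;

    void setup() {
      osc = new TriOsc(this);
      env = new Env(this);
      loopStart = millis();
    }

    void draw() {
      // Playback: step through the grid exactly like a step sequencer.
      if (millis() - lastTick >= stepMs) {
        lastTick = millis();
        if (pattern[step] > 0) {
          osc.freq(440 * pow(2, (pattern[step] - 69) / 12.0));
          env.play(osc, 0.005, 0.08, 0.5, 0.05);
        }
        step = (step + 1) % pattern.length;
      }
    }

    void keyPressed() {
      // Record: each press is rounded to the nearest step, so early or
      // late playing lands where it was intended.
      int idx = "asdfghjk".indexOf(key);
      if (idx < 0) return;
      int target = round((millis() - loopStart) / (float) stepMs) % pattern.length;
      pattern[target] = scale[idx];
      // Monitor the note as it is played in.
      osc.freq(440 * pow(2, (scale[idx] - 69) / 12.0));
      env.play(osc, 0.005, 0.08, 0.5, 0.05);
    }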

For the last five years I have been developing ways to perform as a soloist without the use of a laptop computer. Q: Wait a minute, don’t all those machines you’re using have computers in them? A: Yes, but they are designed as musical instruments with tactile controls and feedback. They also rarely crash and don’t let you check Facebook (yes, that’s an advantage). There’s a whole series of arguments both for and against using laptops for live performance. Let it be known that I have no problem with anyone using laptops to make music! I do it in the studio all the time. I may do it again live at some point, but currently I have been enjoying developing techniques to work around the limitations that performing without a dedicated computer presents.

Cirklon courtesy of Sequentix

These performances include two to five synchronized MIDI devices with sequencing capabilities, buttons, knobs, pads, and/or a keyboard. I may start with some pre-recorded sequences or improvise the material, but usually it’s a combination of the two. As a musician, producer, and sound designer I have been collecting synthesizers for years and have no shortage of sound-making machines. What I am lacking is a way to effectively and inexpensively sequence my existing hardware in real time, and with polyphony, for live performances. Solutions that do more than I need, and therefore cost more than I’d like to spend, include the Sequentix Cirklon and Elektron Octatrack. There are also vintage hardware options like the E-MU Command Station or Yamaha RS7000. These are something I’ll investigate further, but they are usually bulky and difficult to program on the fly.

Pyramid euclidean screen

What I’d like to see more of are small, modern devices that push the capabilities of live sequencing into new realms while maintaining the practical workflow techniques trained musicians rely on. It’s happening to an extent, albeit only internally, on the Teenage Engineering OP-1 with its frequent firmware updates. It’s happening in a few iPad apps, but most of the MIDI sequencing apps still lack real-time recording and/or polyphonic recording. The Pyramid by Squarp is the most promising development I have seen in this department recently (more about the Pyramid at a later date, but for now read this from CDM). Have you found a device or app that handles all your MIDI needs? Do you know about something on the horizon that will make all your MIDI dreams possible? What devices do you use to manage your live MIDI performances?
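As an aside, the euclidean screen pictured above refers to euclidean rhythms, which spread a number of pulses as evenly as possible across a number of steps. A hypothetical helper shows how little code the core idea takes:

    // Hypothetical helper: spread `pulses` onsets as evenly as possible
    // across `steps` slots (a Bresenham-style form of the Bjorklund algorithm).
    boolean[] euclid(int pulses, int steps) {
      boolean[] pattern = new boolean[steps];
      for (int i = 0; i < steps; i++) {
        pattern[i] = (i * pulses) % steps < pulses;
      }
      return pattern;
    }

    // euclid(3, 8)  -> x..x..x.         (the Cuban tresillo)
    // euclid(5, 16) -> x...x..x..x..x.. (a rotation of the common E(5,16))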

SoundCloud Flashback: Music for People on Shelves

People on Shelves

I used Ableton Live to produce the set in real time, and my wavetable glitch machine Max patch to make most of the noises, which I routed into Live using Soundflower.

This five-year-old set is one of the very first things I ever posted on SoundCloud: 86 minutes from a live solo performance with Minneapolis Art on Wheels. Check out the original posts here:

Video documentation:
audiocookbook.org/people-on-shelves/

The original article:
audiocookbook.org/music-for-people-on-shelves/

Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic the audiovisual objects. My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
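For anyone curious about the granulation, a crude approximation of the technique can be sketched in Processing (this is not the actual Vocalise Sintetica patch, just an illustration, and voice.wav is a placeholder sample): repeatedly jump a looping sample's playhead around a position chosen by the performer.

    import processing.sound.*;

    SoundFile sample;
    int lastGrain = 0;
    int grainMs = 60; // start a new grain every 60 ms

    void setup() {
      size(400, 400);
      sample = new SoundFile(this, "voice.wav"); // placeholder file name
      sample.loop();
    }

    void draw() {
      if (millis() - lastGrain >= grainMs) {
        lastGrain = millis();
        // mouseX chooses the read position; a little jitter "sprays" the grains
        float pos = map(mouseX, 0, width, 0, sample.duration());
        sample.jump(constrain(pos + random(-0.02, 0.02), 0, sample.duration() - 0.1));
        // mouseY shifts the pitch of the grains
        sample.rate(map(mouseY, height, 0, 0.5, 2.0));
      }
    }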