Interview: Graham O’Brien’s Drum Controller Video Series

Drum Controller - Graham O'Brien

Graham O’Brien is an exceptional and inventive drummer, composer, and producer. It has been my privilege to play with him at dozens of shows and on at least five separate projects over the last eight years. His latest solo endeavor is a series of five videos titled Drum Controller. Graham had discussed his goals for the project with me, but when I saw and heard the videos I was immediately impressed. I wanted to know more about how he was able to trigger these beautiful and complex electro-acoustic arrangements without touching anything other than his minimal kit of kick, two snares, hi-hats, and a ride.

Note: Graham will be performing music with his Drum Controller setup and Thomas Nordlund on guitar at Honey in Minneapolis this Sunday, June 5, 2016. Read on for the interview and a look at his video series.

Interactivity Sonified Workshop at INST-INT

The INST-INT 2015 conference, exploring the “…art of Interactivity for objects, environment, and experiences,” just happened, and I had the honor and privilege of giving a workshop at the event titled Interactivity Sonified. The intent of the workshop was to teach attendees to sonify their work by triggering, generating, and processing sonic textures or musical forms through interactivity. I covered several basic programming techniques for adding sound to projects driven by input devices and numerical values. Touchscreens, microphones, cameras, gyros, MIDI controllers, or any other stream or set of incoming data might be used to add sound. The sonification of this information adds a whole new sensory dimension to interactive installations and performances.
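To make that concrete, here is a minimal sketch along the lines of the workshop exercises (an illustrative reconstruction, not the exact workshop code), using Processing's bundled sound library to map trackpad or mouse position onto pitch and loudness:

```processing
// Minimal sonification sketch: map incoming numerical values
// (here, mouse/trackpad position) onto sound parameters.
// Assumes Processing 3+ with the processing.sound library.
import processing.sound.*;

SinOsc osc;

void setup() {
  size(400, 400);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  background(0);
  // Horizontal position controls pitch, vertical controls loudness.
  float freq = map(mouseX, 0, width, 110, 880);
  float amp  = map(mouseY, 0, height, 1.0, 0.0);
  osc.freq(freq);
  osc.amp(amp);
}
```

The same map() pattern applies to any incoming stream: swap mouseX for a Leap Motion coordinate, an amplitude reading from the microphone, or a MIDI controller value.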

During the workshop I covered sonification examples in Processing.org and Max while looking at input signals from the Leap Motion, MIDI controllers, video camera, microphone, keyboard, and trackpad. We experimented with recording, looping, reversing, pitch shifting, and granulating sampled audio. We also looked at modeling waveforms and processing them through lowpass, highpass, and bandpass filters, delays, and reverbs. Finally, we explored the convolution reverb in Max for Live, trying out several of the included IRs and discussing the technique of sampling impulse responses.
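As a taste of the sample-manipulation portion, the sketch below loops a sound file, pitch shifts it by changing the playback rate, and sweeps a lowpass filter. It is a simplified stand-in for the workshop examples; "sample.wav" is a placeholder for any short audio file placed in the sketch's data folder:

```processing
// Loop a sample, pitch-shift it via playback rate, and filter it.
import processing.sound.*;

SoundFile sample;
LowPass lowPass;

void setup() {
  size(400, 400);
  sample = new SoundFile(this, "sample.wav"); // placeholder filename
  sample.loop();
  lowPass = new LowPass(this);
  lowPass.process(sample, 800);
}

void draw() {
  background(0);
  // Rate of 2.0 doubles the pitch; 0.5 drops it an octave.
  sample.rate(map(mouseX, 0, width, 0.5, 2.0));
  // Sweep the filter cutoff with the vertical axis.
  lowPass.freq(map(mouseY, 0, height, 100, 8000));
}
```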

In this video I asked the attendees to pull out their headphones cords after completing the task of triggering sounds with a moving object. The resulting cacophony in the room was quite beautiful! I thoroughly enjoyed giving this workshop and would love to do it again. Please be in touch if you’re part of an organization interested in a workshop like this. For more information you can view the slideshow for the workshop at instint.johnkeston.com. Keep in mind that the slide show just a fraction of the activities. Most of the time was spent applying code and examples to either trigger, generate, or process sound.

How Do You Do Your Live MIDI Sequencing?

Arturia BeatStep Pro

While advancements in music technology have led to amazing new instruments, some popular musical devices and applications fail to accommodate musicians with rudimentary to advanced skills in traditional techniques. Don’t get me wrong! I am all for making music technology accessible to the masses. However, with the inclusion of a few key features these devices and applications could be not only good fun for those without formal music education, but also useful for those with it. Furthermore, including those features would encourage non-traditional musicians to develop new techniques and expand their capabilities, knowledge, range, and interaction with other musicians.

SimpleStepSeq

One example of this is the step sequencer. Once again, don’t get me wrong! I love step sequencing. I even built a rudimentary step sequencer in Max back in 2009. Later on I made it into a Max for Live device that you can download here. Step sequencers are everywhere these days. At one point I remarked that it’s hard to buy a toaster without a step sequencer in it. To date that’s hyperbole, but step sequencers have become ubiquitous in MIDI controllers, iPad apps, synths, drum machines, and modular systems.

I love step sequencers because they encourage us to do things differently and embrace chance. However, for pragmatic music making, anyone with some basic keyboard technique will agree that being able to record notes in real time is faster, more efficient, and more expressive than pressing them in via buttons, mouse clicks, or touchscreen taps. Simply including a real-time record mode in addition to the step sequencing functionality would broaden the demographic range and usability of these devices and applications. Many instruments already do this. Elektron machines all have real-time recording, as does the DSI Tempest (although it lacks polyphonic recording). Arturia has gone a step (pun intended) in the right direction with the BeatStep Pro, allowing real-time recording, also without polyphony. And most DAWs handle real-time MIDI recording beautifully. So if all of these solutions exist, what’s the problem?
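Before going on, here is a rough Processing sketch of what a real-time record mode involves (purely illustrative, not any vendor's implementation): sound live key presses immediately, stamp each one with its position in a fixed loop, and replay the captured events on every pass.

```processing
// Real-time loop recording: keys a-k play a C major scale and are
// recorded into a two-second loop that replays continuously.
import processing.sound.*;

TriOsc osc;
Env env;
int loopMs = 2000;                                   // loop length in ms
int[] scaleNotes = {60, 62, 64, 65, 67, 69, 71, 72}; // C major, MIDI numbers
ArrayList<int[]> events = new ArrayList<int[]>();    // {timeInLoop, note}
int lastTick = 0;

void setup() {
  osc = new TriOsc(this);
  env = new Env(this);
}

void draw() {
  int tick = millis() % loopMs;
  // Replay any recorded events whose loop position we just passed.
  for (int[] e : events) {
    boolean due = (tick >= lastTick)
      ? (e[0] > lastTick && e[0] <= tick)
      : (e[0] > lastTick || e[0] <= tick); // the loop wrapped around
    if (due) trigger(e[1]);
  }
  lastTick = tick;
}

void keyPressed() {
  int i = "asdfghjk".indexOf(key);
  if (i < 0) return;
  trigger(scaleNotes[i]);                                   // play now...
  events.add(new int[] {millis() % loopMs, scaleNotes[i]}); // ...and record
}

void trigger(int note) {
  osc.freq(440 * pow(2, (note - 69) / 12.0));
  osc.play();
  env.play(osc, 0.005, 0.1, 0.5, 0.1);
}
```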

For the last five years I have been developing ways to perform as a soloist without the use of a laptop computer. Q: Wait a minute, don’t all those machines you’re using have computers in them? A: Yes, but they are designed as musical instruments with tactile controls and feedback. They also rarely crash and don’t let you check Facebook (yes, that’s an advantage). There’s a whole series of arguments both for and against using laptops for live performance. Let it be known that I have no problem with anyone using laptops to make music! I do it in the studio all the time. I may do it again live at some point, but currently I have been enjoying developing techniques to work around the limitations that performing without a dedicated computer presents.

Cirklon courtesy of Sequentix

These performances include two to five synchronized MIDI devices with sequencing capabilities, buttons, knobs, pads, and/or a keyboard. I may start with some pre-recorded sequences or improvise the material, but usually it’s a combination of the two. As a musician, producer, and sound designer I have been collecting synthesizers for years and have no shortage of sound-making machines. What I am lacking is a way to effectively and inexpensively manage sequencing my existing hardware in real time and with polyphony for live performances. Solutions that do more than I need, and therefore cost more than I’d like to spend, include the Sequentix Cirklon and Elektron Octatrack. There are also vintage hardware solutions like the E-MU Command Station or Yamaha RS7000. These are options I’ll investigate further, but they tend to be bulky and difficult to program on the fly.

Pyramid euclidean screen

What I’d like to see more of are small, modern devices that push the capabilities of live sequencing into new realms while maintaining the practical workflow techniques trained musicians rely on. It’s happening to an extent on the Teenage Engineering OP-1, with its frequent firmware updates. It’s happening in a few iPad apps, but most of the MIDI sequencing apps still lack real-time recording and/or polyphonic recording. The Pyramid by Squarp is the most promising development I have seen in this department recently (more about the Pyramid at a later date, but for now read this from CDM). Have you found a device or app that handles all your MIDI needs? Do you know about something on the horizon that will make all your MIDI dreams possible? What devices do you use to manage your live MIDI performances?
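As a footnote to the Pyramid mention, the euclidean screen pictured above builds rhythms by distributing a number of hits as evenly as possible across a number of steps. A quick sketch of the idea in Processing (one common formulation, certainly not Squarp's actual code):

```processing
// Euclidean rhythm: spread `hits` as evenly as possible over `steps`.
boolean[] euclid(int hits, int steps) {
  boolean[] pattern = new boolean[steps];
  for (int i = 0; i < steps; i++) {
    // A hit lands wherever the running product wraps past a multiple of steps.
    pattern[i] = (i * hits) % steps < hits;
  }
  return pattern;
}

void setup() {
  // E(3, 8) yields the familiar tresillo-like pattern.
  String s = "";
  for (boolean hit : euclid(3, 8)) s += hit ? "x" : ".";
  println(s); // prints x..x..x.
}
```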

Slam Academy of Electronic Arts

I have recently accepted a position as an adjunct instructor at the Slam Academy in Minneapolis, Minnesota. With two Ableton-certified instructors, the school offers a variety of classes in electronic music, but is also stretching out into topics like Max for Live and music for video games. I will be teaching occasional master classes and private lessons that focus on my listed specialties of Max/MSP, Max for Live, Processing, sound synthesis, and jazz theory. Please check out the school at Slam Academy, or like the Facebook page for more information.

Northern Spark In Habit: Living Patterns

Many of you know that I have been working on an eight-channel, spatialized sound, projection, and dance collaboration for almost two years. I composed the music entirely using my collection of analog synthesizers. I also designed an octal sound system (eight discrete channels) to spatialize the music and sounds. The performances are Thursday, June 7 at 9pm, Friday, June 8 at 9pm, and Saturday, June 9 from 9pm until 6am (yes, that is nine long hours). Check out In Habit: Living Patterns for the location and other details.

What may be of particular interest to ACB readers is how I am processing the music for spatialization. The outdoor stage is a raised 18′ × 18′ square that the audience can view from any angle. At each corner I have outward-facing wedges to project sound toward the audience. Behind the audience I have inward-facing speakers on stands, also at each corner of the venue (a public space under the 3rd Avenue bridge in Minneapolis, by the Mississippi River across from the St. Anthony Main Movie Theatre).

Using a Max for Live patch that I developed, and another that is part of the M4L toolset, I am able to rotate sounds around the system in many ways. This includes clockwise and/or anti-clockwise motion at variable frequencies around the outer or inner quads, or both. I can also pan sound between the inner and outer quads with or without the rotation happening simultaneously. Quick adjustments allow me to create cross pans for sweeping diagonals and so on. I originally thought I could do this with one of the many M4L LFOs, but found out this would be impossible. In a future post I will explain why I had to develop my own patch to do this. For now, please enjoy a (sadly) two-channel rough mix of Kolum, the second in the series of sixteen vignettes, and come to the performance to hear it in all of its spatialized, eight-channel glory.
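For the technically curious, the gain math behind that rotation looks roughly like the Processing sketch below. This is a simplified model of the idea, not the patch itself (which, as mentioned, will get its own post): each quad uses equal-power panning between the two corners nearest the source, and a depth parameter crossfades between the outer and inner rings.

```processing
// Per-speaker gains for one quad, corners at 0, 90, 180, 270 degrees.
float[] quadGains(float angle) {
  float[] g = new float[4];
  for (int i = 0; i < 4; i++) {
    // Angular distance from corner i, wrapped into -180..180 degrees.
    float d = ((angle - i * 90) % 360 + 540) % 360 - 180;
    // Equal-power pan between the two corners adjacent to the source;
    // corners more than 90 degrees away stay silent.
    g[i] = (abs(d) < 90) ? cos(radians(d)) : 0;
  }
  return g;
}

// Independent angles allow clockwise and anti-clockwise rotation on the
// outer and inner quads at once; depth (0 = outer, 1 = inner) crossfades
// between the rings with equal power.
float[] eightGains(float outerDeg, float innerDeg, float depth) {
  float[] outer = quadGains(outerDeg);
  float[] inner = quadGains(innerDeg);
  float[] g = new float[8];
  for (int i = 0; i < 4; i++) {
    g[i]     = outer[i] * cos(depth * HALF_PI); // channels 1-4: outer quad
    g[i + 4] = inner[i] * sin(depth * HALF_PI); // channels 5-8: inner quad
  }
  return g;
}

void setup() {
  // Source midway between outer corners 1 and 2, fully on the outer ring.
  printArray(eightGains(45, 0, 0));
}
```

Driving outerDeg and innerDeg with ramps at different rates, and in opposite directions, produces the simultaneous clockwise and anti-clockwise rotations described above.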