Vocalise Sintetica at Echofluxx 14, Prague

On May 7, 2014 I performed Vocalise Sintetica at the Echofluxx Festival in Prague. The piece is made up of four movements: I. Machines (00:00), II. Liquid (18:43), III. Vocalise (28:55), and IV. Sintetica (38:41). Each movement is a playlist of five audiovisual objects that are instantly available to be projected and amplified while being granulated in real time by a performer using a multitouch interface. The performer may loop the gestures applied to the audiovisual objects in order to bring in additional synthesized sound layers that contrast with or mimic the objects (a sketch of the gesture-looping idea follows below). My performance at Echofluxx was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.
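To make the gesture-looping idea concrete, here is a minimal Python sketch that records a stream of touch positions and replays them with their original timing. The GestureLooper class and the apply_grain callback are hypothetical stand-ins; the actual piece runs as a Max patch, so this is purely illustrative.

```python
import time

class GestureLooper:
    """Records a timestamped touch gesture and replays it in a loop.

    Hypothetical sketch: the real piece does this inside a Max patch,
    so the class name and structure here are illustrative only.
    """

    def __init__(self):
        self.events = []        # list of (seconds_since_start, x, y)
        self._start = None

    def record(self, x, y):
        """Store a touch position relative to the start of the gesture."""
        now = time.monotonic()
        if self._start is None:
            self._start = now
        self.events.append((now - self._start, x, y))

    def play(self, apply_grain, loops=4):
        """Replay the gesture `loops` times, calling apply_grain(x, y)
        with the original inter-event timing."""
        for _ in range(loops):
            previous = 0.0
            for t, x, y in self.events:
                time.sleep(t - previous)
                previous = t
                apply_grain(x, y)
```

In use, record() would be fed by the multitouch interface while the performer plays, and play() would then drive a separate synthesized layer against the live one.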

Audiovisual Granular Synthesis of Water Objects

This is a screen capture from a Max project I developed that does interactive, synchronized granular synthesis of corresponding sound and video, tentatively titled AVGM for Audiovisual Grain Machine. I have used the software during a performance at the Echofluxx festival in Prague and at the Katherine E. Nash Gallery for the opening of The Audible Edge exhibition during Northern Spark 2014. The sketch below illustrates the core idea of pulling synchronized grains from the two media streams.
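Stripped of all the Max plumbing, the essential move is to take a windowed slice of audio and the video frame from the same point in a clip, so the two stay locked as the grain position moves. Here is a minimal sketch assuming NumPy arrays for both media; av_grain and its parameters are illustrative names, not the AVGM implementation.

```python
import numpy as np

def av_grain(audio, frames, sr, fps, position, grain_ms=80):
    """Pull one synchronized grain from an audio buffer and its video.

    audio:    1-D array of samples at rate sr (Hz)
    frames:   sequence of video frames captured at fps
    position: grain start within the clip, in seconds
    Returns (windowed_audio_grain, video_frame) for playback/projection.
    Illustrative sketch only; the real project is a Max patch.
    """
    n = int(sr * grain_ms / 1000)
    start = int(position * sr)
    grain = audio[start:start + n].astype(float)
    grain *= np.hanning(len(grain))          # envelope to avoid clicks
    frame = frames[min(int(position * fps), len(frames) - 1)]
    return grain, frame
```

Sweeping `position` from a touch coordinate while repeatedly calling a function like this is what makes the sound and image granulate together rather than drifting apart.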

AudioCookbook at Moogfest 2014!


I am pleased to announce that I will be presenting at Moogfest this year! I will stand humbly alongside Yuri Suzuki, Felix Faire, Yoon Chung Han, and Scott Snibbe, participating in “an afternoon exploring alternative interfaces for sound generation and manipulation, and the future of visual music” programmed by Eyeo Festival organizers including industry visionary Dave Schroeder. Check out the Moogfest site for more details.

Multitouch on a Mac without a Touchscreen

Fingerpinger

As you may have noticed, it’s been a little quiet around here lately. The reason is that since the end of 2013 I have been keeping myself busy with a studio remodel (more on that later), followed by preparations for a performance, I’m pleased to announce, at Echofluxx 14 in Prague in May 2014. Here’s a quick note about Echofluxx 13 from their site:

Echofluxx 13 is a festival of new media, visual art, and experimental music produced by Efemera of Prague with Anja Kaufmann and Dan Senn, co-directors. In cooperation with Sylva Smejkalová and early reflections, the Prague music school (HAMU), and the Academy of Fine Arts (AVU), Echofluxx 13 will present international and Czech presenters in a five-day festival at the Trafačka Arena in Prague, Kurta Konráda 1, Prague 9, May 7-11, 2013. For more information contact: info@echofluxx.org

I’ll discuss more details about this upcoming performance in another post. For now I would like to bring attention to the possibility of using the Mac trackpad and/or Apple’s Magic Trackpad for multitouch. My performance at Echofluxx involves using custom-built software to loop granular audiovisual media. This idea evolved from past projects that used a 32″ touchscreen. This time the media will be projected, so I naturally decided to use the iPad as the controller. I built the project using Cycling ’74 Max and MIRA, which was very convenient, but I couldn’t get over the latency of using the iPad over WiFi for multitouch control.

I decided that the most convenient alternative would be to use the trackpad on the Mac laptop. Max has an object called “mousestate” that polls button-status and cursor-position information from the default pointer device; however, it is not designed to take advantage of multitouch data. This is where Fingerpinger comes in. Fingerpinger was able to detect ten independent touch points (perhaps more, but I was out of fingers) on the built-in trackpad of my MacBook Pro, which raises the question: how did I take that screenshot?

Ten touch points on such a small surface is probably impractical, but I only need two: one for X-Y data and a second for volume. Most importantly, I wanted the audiovisual content to be activated simply by touching the trackpad rather than having to click or hold down a key. Fortunately, Fingerpinger reports a state value for each touch point that I was able to use to activate on touch and release. The latency is hardly noticeable compared to an iPad over WiFi, and I have also simplified my setup, meaning I can travel with less equipment and rely on fewer technologies. I still like the idea of using an iPad for multitouch controls, mostly because of the opportunities for visual feedback, but for this particular application Fingerpinger is a great solution. A rough sketch of the two-touch routing logic follows.
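For anyone curious how that routing might look, here is a hedged Python sketch. The arguments are assumptions distilled from Fingerpinger’s per-touch output (a touch id, normalized coordinates, and a touching/released state); the actual patch does this with Max objects, and set_grain_position and set_volume are hypothetical hooks into the synth.

```python
def handle_touch(tracks, touch_id, x, y, touching,
                 set_grain_position, set_volume):
    """Route up to two simultaneous touch points.

    The first active touch supplies X-Y grain data; a second touch
    supplies volume from its Y coordinate. Releasing all touches
    gates the layer off, mirroring activate-on-touch-and-release.
    """
    if touching:
        tracks[touch_id] = (x, y)
    else:
        tracks.pop(touch_id, None)
        if not tracks:
            set_volume(0.0)              # last release: gate off

    ids = sorted(tracks)
    if ids:
        gx, gy = tracks[ids[0]]
        set_grain_position(gx, gy)       # first touch: grain X-Y
    if len(ids) > 1:
        _, vy = tracks[ids[1]]
        set_volume(vy)                   # second touch: volume
```

Each incoming Fingerpinger list would be unpacked into these arguments, with `tracks` a plain dict shared across calls so touches can come and go independently.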

Duet for Synthesizers and Mobile Conductor (2013)

Duet for Synthesizers and Mobile Conductor is a piece composed and performed by John Keston in collaboration with David T Steinman, who also performs in the piece as the mobile conductor. Steinman creates a real-time audiovisual score that is broadcast into the performance space from a remote location. This score consists of textural, atonal, and arrhythmic “sound features” produced with artifacts from Steinman’s apartment. The imagery and amplified sound become content within the music as it is interpreted through improvisations by the synthesist, John Keston. Keston accompanies the sound features while controlling three analogue synthesizers (Novation Bass Station II, Korg Monotribe, and Korg Volca Keys). This use of an audiovisual score is a means to harness the sensory influence of non-musical sounds and images in our environments, elevating these sources to compositional structures.

Duet for Synthesizers and Mobile Conductor was performed on November 7, 2013 at the Strange Attractors festival in St. Paul, Minnesota. This video was captured during a private performance given shortly after the public showing. The piece is the first in a series of new Duets by Keston made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.

Duets Setup

The shot above shows the setup I chose to use for this project. Although it is possible to synchronize these instruments, for this piece I decided to run them independently, creating poly-temporal accompaniment for the atemporal audio I received from Steinman’s mobile conducting. Multiple free-running clocks were involved. For example, the Bass Station II has two LFOs and a BPM setting for its arpeggiator and sequencer, and its second oscillator can be routed to modulate the filter frequency. Both the Monotribe and the Volca also have BPM settings for their sequencers and a free-running LFO. In addition, the Volca and Memory Man delays produced unsynchronized repetitions. All of these independent time sources helped create chaotic, non-interlocking rhythms that mimic and/or contrast with the audiovisual score, as the sketch below illustrates.
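As an illustration of the poly-temporal idea, this short Python sketch merges events from several free-running clocks into one timeline. The clock names and periods are invented for the example, not measurements from the instruments; the point is simply how independent periods interleave without ever locking together.

```python
import heapq

def polytemporal_events(periods, duration):
    """Merge events from several free-running clocks into one timeline.

    periods: dict mapping a clock name to its period in seconds
             (names like 'bs2_arp' below are illustrative only)
    Yields (time, clock_name) tuples in chronological order.
    """
    heap = [(p, name, p) for name, p in periods.items()]
    heapq.heapify(heap)
    while heap:
        t, name, p = heapq.heappop(heap)
        if t > duration:
            continue                      # clock has run past the piece
        yield t, name
        heapq.heappush(heap, (t + p, name, p))

# Example: three unsynchronized clocks over ten seconds
for t, name in polytemporal_events(
        {'bs2_arp': 0.47, 'monotribe_seq': 0.61, 'volca_lfo': 0.83}, 10):
    print(f"{t:5.2f}s  {name}")
```

Because the periods share no common divisor, the merged event stream never settles into a repeating grid, which is the non-interlocking quality described above.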

Mobile Rig

The sound and video from the mobile conductor were broadcast via UStream using a Logitech Broadcaster camera. This technique makes it possible for the mobile conductor to choose content for the piece from anywhere with internet access while still performing in near real time with the ensemble; it is what made our performances with DKO at Northern Spark 2013 and WAM Bash 2013 possible. It also means that the quality of the video and audio from the broadcast is limited. Other examples of Duets (Duet Under Bridge, Duet for Synthesizer and Spin Cycle, Duet for Synthesizer and Rail Cars) do not have this requirement and have, or will have, better sound and video quality than the Instant Cinema series.