John Keston Performance at Echofluxx14


I’m very excited to be performing at Echofluxx14 this May 7 in Prague. My performance comes a couple of weeks after my presentation at Moogfest in Asheville, where I’ll be presenting the software that I have been developing for my Echofluxx performance. It’s a Max/MSP application for audiovisual granular synthesis: it allows a performer to apply granular synthesis to sound and its corresponding video using a touch interface. The audio and video are tightly synchronized, creating uncanny effects. The software can also capture and repeat gestures, so the performer can accompany the projections with multiple layers and arrange compositions in a performance setting. My performance will include several movements that granulate everyday sounds and images, then contrast them with tones produced on analogue synthesizers. Video documentation is forthcoming.
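The actual instrument is a Max/MSP patch, but the core audio idea, looping a short windowed "grain" taken from somewhere in a source buffer, can be sketched in a few lines. This is a minimal illustration with NumPy, not the patch itself; the function name and parameters are my own for the example.

```python
import numpy as np

def granulate(buffer, position, grain_len, repeats):
    """Loop a short windowed 'grain' of the source, as in granular synthesis.

    position  -- start of the grain as a fraction of the buffer (0.0-1.0)
    grain_len -- grain length in samples
    repeats   -- how many times the grain is repeated back to back
    """
    start = int(position * (len(buffer) - grain_len))
    grain = buffer[start:start + grain_len].copy()
    # A Hann window tapers the grain edges so repeats do not click.
    grain *= np.hanning(grain_len)
    return np.tile(grain, repeats)

# One second of a 220 Hz sine as stand-in source material.
sr = 44100
t = np.arange(sr) / sr
source = np.sin(2 * np.pi * 220 * t)

out = granulate(source, position=0.25, grain_len=2048, repeats=8)
```

In the actual piece the same grain boundaries also drive the video playback loop, which is what keeps the sound and image locked together.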

My Echofluxx performance was made possible by a grant from the American Composers Forum with funds provided by the Jerome Foundation.

Multitouch on a Mac without a Touchscreen


As you may have noticed, it’s been a little quiet around here lately. Since the end of 2013 I have been keeping busy with a studio remodel (more on that later), followed by preparations for a performance (I’m pleased to announce) at Echofluxx 14 in Prague, May 2014. Here’s a quick note about Echofluxx 13 from their site:

The Echofluxx 13 is a festival of new media, visual art, and experimental music produced by Efemera of Prague with Anja Kaufmann and Dan Senn, co-directors. In cooperation with Sylva Smejkalová and early reflections, the Prague music school (HAMU), and Academy of Fine Arts (AVU), Echofluxx 13 will present international and Czech presenters in a five-day festival at the Tracfačka Arena in Prague, Kurta Konráda 1, Prague 9, May 7-11, 2013. For more information contact: info@echofluxx.org

I’ll discuss more details about this upcoming performance in another post. For now I would like to bring attention to the possibility of using the Mac trackpad and/or Apple’s Magic Trackpad for multitouch. My performance at Echofluxx involves using custom-built software to loop granular audiovisual media. The idea evolved from past projects that used a 32″ touchscreen. This time the media will be projected, so I naturally decided to use the iPad as the controller. I built the project using Cycling ’74 Max and MIRA, which was very convenient, but I couldn’t get over the latency of using the iPad over WiFi for multitouch controls.

I decided that the most convenient alternative would be to use the trackpad on the Mac laptop. Max has an object called “mousestate” that polls button-status and cursor-position information from the default pointer device. However, it is not designed to take advantage of multitouch data. This is where Fingerpinger comes in. Fingerpinger was able to detect ten independent touch points (perhaps more, but I was out of fingers) on the built-in trackpad on my MacBook Pro, which raises the question: how did I take that screenshot?

Ten touchpoints on such a small surface are probably impractical, but I only need two: one for X-Y data and a second for volume. Most importantly, I wanted the audiovisual content to be activated simply by touching the trackpad rather than having to click or hold down a key. Fortunately, Fingerpinger has a state value for each touchpoint that I was able to use to activate on touch and release. The latency is hardly noticeable compared to an iPad over WiFi, and I have also simplified my setup, meaning I can travel with less equipment and rely on fewer technologies. I still like the idea of using an iPad for multitouch controls, mostly because of the opportunities for visual feedback. But for this particular application Fingerpinger is a great solution.
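The activate-on-touch behavior amounts to edge detection on each finger's state flag: a finger appearing is an attack, a finger disappearing is a release. Here is a minimal, hypothetical sketch of that logic in Python; in the actual project this happens in Max, driven by Fingerpinger's per-touchpoint state value.

```python
def touch_events(prev_state, state):
    """Derive attack/release events from per-finger touch presence flags.

    prev_state and state map a finger id to True (touching) or False.
    Returns a list of ("attack" | "release", finger_id) tuples.
    """
    events = []
    for finger in sorted(set(prev_state) | set(state)):
        was = prev_state.get(finger, False)
        now = state.get(finger, False)
        if now and not was:
            events.append(("attack", finger))   # finger just landed
        elif was and not now:
            events.append(("release", finger))  # finger just lifted
    return events
```

Comparing successive polls this way means no clicking or key-holding is needed; simply resting a finger on the trackpad starts the corresponding layer, and lifting it stops it.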

Machine Machine Touchscreen Instrument

Machine Machine (2013) is a 32″ touchscreen installation that functions as an electronic instrument. Granular synthesis is used to loop “grains” of sound and video at variable lengths and frequencies. These parameters are based on the y-axis of the touch point on the monitor, while the x-axis determines the position of the grain within the timeline. The piece was exhibited last month at the Northrup King Building in Minneapolis during Art-a-Whirl and for Visual Storage, the MCAD MFA thesis exhibition.
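The mapping from touch point to grain parameters can be sketched as follows. This is an illustrative approximation, not the installation's actual code: the function name, the length range, and the assumption that y runs from 0 at the bottom of the screen to 1 at the top are all mine.

```python
def touch_to_grain(x, y, clip_len, min_len=512, max_len=11025):
    """Map a normalized touch point (x, y in 0.0-1.0) to grain parameters.

    y controls grain length, and therefore repeat frequency: touches
    near the top (y -> 1.0) give short, rapidly repeating grains;
    touches near the bottom give long, slow ones.  x scrubs the grain's
    start position through the clip's timeline.
    Returns (start_sample, grain_len_samples).
    """
    grain_len = int(min_len + (1.0 - y) * (max_len - min_len))
    start = int(x * max(clip_len - grain_len, 0))
    return start, grain_len
```

Because the same start and length drive both the audio buffer and the video frames, dragging a finger around the screen scrubs sound and image together.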

Wired: Installation lets you remix actor’s face and voice

On Wednesday, November 16, 2011, Olivia Solon of Wired.co.uk wrote an article describing my piece, Voice Lessons. I have creativeapplications.net to thank for this one. Olivia found the article about my piece there and then emailed me to ask for a brief interview. We conducted the interview over email and the article was published the next day. Read the article by Filip Visnjic on Creative Applications Network. Read the article from Olivia Solon on Wired. Thanks, Filip and Olivia!

Video of Voice Lessons Touch Screen Installation

Voice Lessons is an electronic audio device that interrogates the popular myth that every musical instrument imitates the human voice. Touching the screen allows the participant to manipulate the visuals and vocalizations of the “voice teacher” as he recites vocal warm-up exercises.

The piece resides in the space between a musical instrument and a voice lesson. Move the touch point left, right, up, and down to explore the visual and auditory possibilities. Rapid, high-pitched loops occur while touching near the top of the screen, while longer, lower-pitched loops are heard near the bottom.

The actor, also named John Keston, is my retired father, who became a voice teacher after a long career on stage in plays, operas, and musicals with the Royal Shakespeare Company in our native England and abroad.

Voice Lessons
32” interactive touch screen installation
By John Keston 2011