György Ligeti’s Artikulation (1958) Animated

I just came across this animation of one of György Ligeti's few electronic compositions, Artikulation (1958). This animated sequence puts my collaboration with Piotr Szyhalski, Post-prepared Piano, in historical context. The visuals for Ligeti's piece were created by Rainer Wehinger in response to the music, while in Post-prepared Piano (see the animated sequence) the visuals are converted directly into an electronic version of an existing work through a computerized process.

Sound Spheres – a psychospatial model

This is my first contribution to Audiocookbook, and I want to thank John for creating such a cool place to share. I have wide-ranging interests in audio that link to tech, brain, music, therapy, education, and most everything that vibrates… it's alive! (Or we can make it so with a bit of sound design.) So I plan to write some musings on these professional and creative explorations, with the hope that they will kick-start a few inspirations in our community.

A lot of what I’ve written in my book “Sound Design” came from researching the best practices of the industry, the scientific basis of hearing, and music theory. However, I found the more interesting approaches through personal observation and experimentation. One of those discoveries I call Sound Spheres, which I’ll introduce in this post. (I wrote an in-depth article, posted on my website www.sounddesignforpros.com, that you can read if you would like more details.)

Sound Spheres

If we consider the human experience of our environment from its most intimate to most external, a model of six concentric spheres can serve to describe the various levels of sonic information available.

  1. I think – Internal audio thoughts:  memories, daydreaming, dreams, mental rehearsal or notes to oneself, internal music.
  2. I am – Sounds created by one’s own body:  heartbeat, breathing, mouth sounds (chew, cough, hiccup, sneeze, etc.), scratching, digestive sounds.
  3. I touch – Contact with the outside world that sets up sonic vibrations:  footsteps, manipulating tools, utensils, food, contact sports, typing.
  4. I see – Events, objects and actions in our field of vision that create sounds (equivalent to “on screen”):  people talking visibly, television, cars passing by, boiling teapot.
  5. I know – Sounds that have a reference to our environment or experience, but no visible source:  people talking outside our vision, crickets, radio music, wind.
  6. I don’t know – Unrecognizable sounds, out of sight. The source cannot be identified, but the acoustic parameters (loud-soft, high-low pitch, short-long, etc.) and emotional qualities (soothing, scary, oddly familiar, weird, etc.) can be described.

When this model of perceptual reality is translated to audiovisual media, sound can be used to intentionally manipulate the physical and psychological orientation of the audience/listener. Several examples:

    • Moving from the inner to outer sound spheres will direct the attention of the audience from more personal contact with the character toward more awareness of the surrounding environment.
    • Contracting or expanding the number of spheres simultaneously present will limit or expand the attention demanded of the audience. Limiting can help focus or create tedium. Expanding can help stimulate or overwhelm.
    • Transitioning a single sound from one sphere to another can drive the drama. Very fertile ground for storytelling can be plowed with sound design creating tension, anticipation, release and surprise. Some possible movements between spheres:  I don’t know -> I see; I think -> I know; I touch -> I don’t know

TRY THIS:  Sit for 3 minutes and write down every sound you hear, associating it with a specific sound sphere. What informs you of your environment, what draws your attention, what creates a feeling or emotion?  Are there any sounds in the “I don’t know” sphere, and if so, what kind of reaction does this cause – curiosity, laughter, fear?  Note in particular which sounds shift from one sphere to another. Where do you experience transitions, tension, build, climax and resolution?  How can this be used in a filmic scene to move the story?
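If you would rather log the exercise on a laptop than on paper, here is a minimal sketch that treats the six spheres as a small data structure. The sphere names come from the list above; the logging helper and the example entries are hypothetical, just one way to keep timestamped notes you can review afterwards.

```python
# Minimal sketch (hypothetical) for logging the "TRY THIS" listening exercise:
# tag each sound you notice with one of the six Sound Spheres and keep a
# timestamped list to review once the 3 minutes are up.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Sphere(Enum):
    I_THINK = 1       # internal audio thoughts
    I_AM = 2          # sounds of one's own body
    I_TOUCH = 3       # contact with the outside world
    I_SEE = 4         # visible sources ("on screen")
    I_KNOW = 5        # recognizable but unseen sources
    I_DONT_KNOW = 6   # unrecognizable, out of sight


@dataclass
class Observation:
    sound: str
    sphere: Sphere
    note: str = ""
    time: datetime = field(default_factory=datetime.now)


log = [
    Observation("refrigerator hum", Sphere.I_KNOW, "constant, easy to ignore"),
    Observation("fingers on keyboard", Sphere.I_TOUCH, "pulls attention back to the desk"),
    Observation("distant thump, unclear source", Sphere.I_DONT_KNOW, "mild curiosity"),
]

for obs in log:
    print(f"{obs.time:%H:%M:%S}  [{obs.sphere.name}]  {obs.sound} — {obs.note}")
```

Reading the log back afterwards makes the shifts between spheres, and any “I don’t know” moments, easy to spot.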

Coming up:  a new audio game in development at www.3DeafMice.com. Check out these rockin’ rodents!

Perihelion Dub

Here’s a piece dedicated to our planet’s recent asteroid near miss and “unrelated” spectacular meteor explosion. I went back to my roots and produced some psychedelic, dub-delayed business with a little arabesque-miami-vice in the middle. Please enjoy responsibly.

DKO at FRANK Part 3: Everyday Music

This is another excerpt from a performance by DKO from the MCAD MFA open studio night on December 7, 2012. The document features Oliver Grudem (not shown), who produced the audiovisual score in real time. The video and sound coming from the LED display and loudspeaker below it were broadcast into the performance space as Oliver walked around the Minneapolis Uptown area during a snowstorm. Listen for traffic, footsteps, car horns, and the occasional blurt of human speech. The visuals and sound from his walk provided a “score” for the ensemble to respond to as we improvised. Oliver was also able to hear the musical reactions to the audiovisual score as he was broadcasting and respond accordingly.

The piece was recorded with my custom built binaural head microphone (Vincent) to capture the sound localization of the performance space. Remember that it is necessary to wear high quality, circumaural headphones to experience the binaural effect. While watching, imagine you are in the same position as Vincent. You should hear the bass clarinet in your left ear, the Rhodes and synthesizers to the right and the drums and video sound in front. The relative height of the sound should also be noticeable.