ISSTA 2019 Presentation and Performance

Currently I’m in Cork, Ireland, to present and perform at the International Sound in Science Technology and the Arts (ISSTA 2019) conference. This year, on Thursday, October 31st, I am scheduled to give a paper about a project I have been working on titled IGNIEUS; then on Friday I will give a solo electronic music performance related to the paper. I will share more about this soon. For now, you can find the program at ISSTA.ie.

This year the conference features a keynote talk and performance by Ableton Live co-creator Robert Henke, who will be performing his work Dust.

Dust is a slow and intense exploration of complex textural sounds, shredded into microscopic particles, and pulsating interlocking loops, recomposed during an improvised performance. The sources are leftovers of digital processes, material created with old analogue synthesisers, noises of all colours and flavours, field recordings; splashing waves from a shingle beach, captured on site in Australia, a massive storm, steam from my Italian coffee maker, crackles from the lead-out groove of a worn record, hum and electrical discharges from a large transformer, collected over several years, and refined and deconstructed in various ways. — Robert Henke

I am pleased to be featuring the Organelle M from Critter & Guitari in the setup for my performance on Friday. I have had the instrument for about six weeks, which has been just long enough to scratch the surface of the device’s capabilities. I’m also using the Bass Station II with the new 4.14 firmware. More to come!

Meta Composition Lets Audience Compose Text Scores

[Image: ICMLM app screens]

Now that I have announced my upcoming project Instant Composer: Mad-libbed Music (ICMLM), it is only fair that I share a little bit about the thought process and inspiration behind the piece. The inspiration comes from Pauline Oliveros’ instructional scores, sonic awareness, and deep listening practice. In an interview with Darwin Grosse, Oliveros explains in a very matter-of-fact fashion that her text scores are instructions for the musicians or a soloist to follow. Often allowing for broad interpretation and improvisation, the scores rarely include musical symbols or notation.

Much of my own recent work involves the exploitation of chance: duets with traffic, trains, and the Singing Ringing Tree, for example. ICMLM surrenders that chance to the audience by handing the writing over to minds free of the context surrounding the concept, preparation, and development of the “outer composition.” In this way ICMLM is a meta composition that allows the audience to compose within parameters predefined by the artist. However, the limitations placed on the compositional tool provided are not meant to confine participants.

[Image: ICMLM app screens, continued]

The simplest implementation of this concept would be a text area where the author writes whatever they want. I didn’t do this in part because I wanted to make the process engaging, inviting, and user friendly. It is not my intent to intimidate the audience. This is an experiment, and we will not dismiss what anyone chooses to compose for the ensemble. The process of composing happens within a web app that allows the composer to specify instrumentation, tonality, dynamics, mood, tempo, length, title, and author. All the choices aside from instrumentation and length can be entered freely as any word or phrase the author chooses. In some cases optional choices are offered from a context-sensitive menu, but for “mood,” for example, the author must use their own words.
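For a concrete sense of what a finished submission might contain, here is a minimal sketch of one way each score could be represented as a single record. The field names, types, and values below are my own illustration of the parameters described above, not the app’s actual schema.

```typescript
// Hypothetical shape of one audience-composed score (illustrative only).
interface Composition {
  title: string;             // free text
  author: string;            // free text
  instrumentation: string[]; // constrained choice from the available ensemble
  lengthMinutes: number;     // constrained choice, e.g. a few preset durations
  tonality: string;          // free text, with optional menu suggestions
  dynamics: string;          // free text, with optional menu suggestions
  tempo: string;             // free text, with optional menu suggestions
  mood: string;              // always in the author's own words
}

// Example of what a submission might look like once the form is filled in:
const example: Composition = {
  title: "Nocturne for Passing Trains",
  author: "Anonymous Visitor",
  instrumentation: ["violin", "bass clarinet", "drums"],
  lengthMinutes: 4,
  tonality: "murky and unresolved",
  dynamics: "starts loud, ends in a whisper",
  tempo: "like a slow walk home",
  mood: "restless",
};
```

However it is actually stored, the point is that only instrumentation and length are constrained; everything else arrives in the audience’s own words for the ensemble to interpret.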

What this means for the “outer composition” and the ensembles constructed for each piece is that the scores are almost entirely unpredictable. Scores might take the form of a Mad Lib when the author chooses to insert nonsense or humorous terms and phrases. On the other hand, fascinating challenges might arise as thoughtful and provocative language is used to inspire the improvising musicians. Whatever happens, a large part of the motivation and excitement about this project for me is not knowing what will happen until the piece is performed. I am looking forward to collaborating with the minds of our audience through the musical and sonic interpretations of their ideas.

Instant Composer: Mad-libbed Music

[Image: Instant Composer: Mad-Libbed Music]

Northern Spark 2015 takes place on June 13, 2015, and once again I am excited and honored to be taking part. This year I am directing and producing a project designed to directly involve the audience in an all-night musical performance piece. Instant Composer: Mad-libbed Music (ICMLM) is a collaboration with a group of my interactive media students at Art Institutes Minnesota and an ensemble of improvising musicians from the Minneapolis and St. Paul community.

ICMLM gives the audience a visceral connection to the ensemble because they will be choosing the members and composing the music! We have designed a mobile web application that allows the audience to write a piece of music that will be played by an ensemble within minutes of composing it. The compositions are textual or instructional scores popularized by Pauline Oliveros.

Pauline Oliveros is an American composer and accordionist who is a central figure in the development of experimental and post-war electronic art music … Oliveros has written books, formulated new music theories and investigated new ways to focus attention on music including her concepts of “Deep Listening” and “sonic awareness”. — Wikipedia

In five easy steps visitors will write their piece and submit it for its debut. The app allows participants to choose the instrumentation, tonality, dynamics, tempo, and length without needing to know any musical terms or techniques. The scores are like a Mad Lib, so we anticipate humor and transgressive play, but this will only make it more challenging and interesting for the ensembles.

The event is being held rain or shine on June 13, 2015, inside the historic Mill City Museum on the banks of the Mississippi River in downtown Minneapolis. It is free and open to the public and runs from dusk until dawn (9:00pm until 5:26am).

Participating students include: Ariel Marie Brooks, Michael Brooks, Renae Ferrario, Meg Gauthier, Abram Long, Valeria C. Sassi, Adam Schmid, and Steven Wietecha. The musicians include: Chris Cunningham (guitars), Jon Davis (bass, bass clarinet, saxophones), DeVon Russell Gray (bassoon, keyboards), Rajiah Johnson (flute), John Keston (keyboards), Donnie Martin (violin), Thomas Nordlund (guitars), Cody McKinney (bass), Graham O’Brien (drums), and Adam Schmid (drums).

Musical Synthesis and Sonic Environments

[Image: Architectural drawing of the Singing Ringing Tree from Tonkin Liu]

I am quite honored to have an article about my recent work published by the American Composers Forum (ACF). The article was written by ACF member Timothy Hansen and is available here. The focus of the piece is on my duets with the Singing Ringing Tree. From the article:

On a bare hill overlooking the village of Burnley in Lancashire, England, stands the Singing Ringing Tree, an array of galvanized steel pipes stacked in a swirled sculpture to resemble a stylized broad-boughed tree. Standing alone on this otherwise empty hill it is visually striking enough, but it’s when the wind picks up that the Singing Ringing Tree’s true purpose is revealed. A haunting chorus of hollow, almost ghostly tones fills the air, making the open sky seem wider than before, stretching from horizon to horizon over a broad, clear landscape: the Tree and its disembodied chorus starkly underlines that, here, you are alone.

This concept of an artificial “sonic environment” was arguably born through the work of John Cage, perhaps the first and fiercest proponent of listening to one’s surroundings as music. His infamous 4’33” kick-started a whole branch of composition where “non-musical” environmental sounds become an integral part of the piece.

British-born John Keston is one of Cage’s modern-day disciples. Cage had already been a longtime influence on Keston when he commenced his master’s program at Minneapolis College of Art and Design, but while at MCAD, Keston began to move beyond simply listening to his environments as sources of music and started considering them as collaborative partners. Armed with a synthesizer, he began to create a series of sonic environment duets.

“I started these duets close to home in Northeast Minneapolis,” explains Keston. “My neighborhood is crisscrossed with railways, rail bridges, and rail yards. I found that I could coax music from everyday ambience by emphasizing rhythms and textures with a portable synthesizer.” Once he had exhausted the possibilities of his local neighborhood he began to search for, as he describes it, “more exotic locations.” This was how, in 2014, with the help of a grant from the Jerome Fund for New Music, Keston found himself seated at the foot of the Singing Ringing Tree, ready to create a series of new duets with his strange, lonely collaborator.

“I did not compose any music ahead of time,” says Keston. “I knew that I needed to experience the Singing Ringing Tree in the flesh to legitimately collaborate with it. The music from the Tree can change dramatically by the minute. On one of the five days I was there it was mute when I arrived. A few hours later it began to sing quietly as the wind picked up. My approach was to let myself react to what it did from one moment to the next. There was no way to direct my collaborator. This was liberating because I could only accept, appreciate, and respond to its performances.”

Keston’s sonic environment duets are especially unique to his practice due to a lifelong fascination with synthesizers. “When I was ten my Dad brought home two records by Isao Tomita: Firebird and Pictures at an Exhibition,” Keston recalls. “I was immediately fascinated by the sounds on the recordings. The album cover of Pictures at an Exhibition showed the room-sized Moog modular synthesizer that Tomita was using. The images of the mysterious technology and the fantastic sounds spurred my curiosity. Later, as a teenager living in the States, I managed to buy my first synth: a Moog Rogue with a broken key.”

Today, synthesizers are an integral part of Keston’s practice, which draws from the gamut of music technology and new media. Keston also has a background in software development, enabling him to build software and hardware from scratch to serve his artistic goals. But his motivation for creating such work goes beyond artistic impulse: Keston believes his work serves to humanize music technology. Keston explains:

“If we are going to use technology to create art then I feel it is necessary to inject the human engagement of technology transparently into the work in order for it to reflect the contemporary human condition. If not, then the art might be mistaken for art created by machines rather than art created by humans with the aid of machines. Don’t get me wrong. I am fascinated by algorithmic music, and the idea of art created by artificial intelligence. I look forward to experiencing art that is fashioned entirely by AI. Duets with the mechanical environments we live in using electronic devices to mimic or contrast the sonic landscape reflect the ongoing amalgamation of people with technology.”

Contributed by Timothy Hansen

Please read the short piece at ComposersForum.org. During the interview for the article I was asked some interesting questions that didn’t make it into the final draft. I’ll share some of those answers in upcoming posts.

Real Orchestra vs Synth Mockup – Part 4/6


This is the fourth part in a small series of blog posts I’ll make about the real-world differences between orchestral mockups (or synth orchestras) and real orchestras. As a composer who is fortunate to work regularly with live orchestras, I’ll try to help show the difference between a decent demo recording and a mixed and mastered finished recording. For this example, I’ve chosen an exciting track from my album “Resonance Theory” called “Speed”. The sixteen-strong cello & bass section hated me after this, and you’ll see why!


 