David Cope’s Emily Howell

I’ve just read a fascinating article about the composer David Cope, who is known for creating music in the style of Bach, Mozart, and others with software he developed called Emmy. I first heard David Cope’s work on Radiolab and was intrigued by his approach. The article, Triumph of the Cyborg Composer, discusses his latest application, Emily Howell. Cope is using the computer in a more collaborative way, composing music in his own style with the help of his program. A couple of audio examples within the article illustrate the musical results.

Cope has received a lot of criticism regarding his work, including statements that his music lacks soul because it was written by a computer. But was it really written by a computer? I think a better term is generated. Cope wrote the software, so I would argue that the music generated by the software was ultimately written by the software developer: in this case, Cope himself. In other instances I might argue that the music was created by the user of the software tool rather than by the developer of the software. It comes down to who is at the controls. What decisions are being made, and by whom, or perhaps by what?

Since I’ve developed and am currently using software to perform and record generative music, I am curious about your opinions. You may have heard pieces on this site generated by the GMS, or perhaps you listened to the excerpts in the article. What do you think? Does music generated by computers lack soul? Does it diminish the human, communicative qualities contained in the work? Or are we simply using computers as tools? Perhaps, as computers and software evolve, we might begin to collaborate artistically with them rather than just use them slavishly. Based on the work of Cope and others, I believe that we are closer than we think to this becoming a reality.

This entry was posted in Audio News and GMS by John CS Keston. Bookmark the permalink.

About John CS Keston

John CS Keston is an award-winning transdisciplinary artist reimagining how music, video art, and computer science intersect. His work both questions and embraces his backgrounds in music technology, software development, and improvisation, leading him toward unconventional compositions that convey a spirit of discovery and exploration through the use of graphic scores, chance and generative techniques, analog and digital synthesis, experimental sound design, signal processing, and acoustic piano. Performers are empowered to use their phonomnesis, or sonic imaginations, while contributing to his collaborative work. Originally from the United Kingdom, John currently resides in Minneapolis, Minnesota, where he is a professor of Digital Media Arts at the University of St Thomas. He founded the sound design resource AudioCookbook.org, where you will find articles and documentation about his projects and research. John has spoken, performed, or exhibited original work at New Interfaces for Musical Expression (NIME 2022), the International Computer Music Conference (ICMC 2022), the International Digital Media Arts Conference (iDMAa 2022), International Sound in Science Technology and the Arts (ISSTA 2017-2019), Northern Spark (2011-2017), the Weisman Art Museum, the Montreal Jazz Festival, the Walker Art Center, the Minnesota Institute of Art, the Eyeo Festival, INST-INT, Echofluxx (Prague), and Moogfest. He produced and performed in the piece Instant Cinema: Teleportation Platform X, a featured project at Northern Spark 2013. He composed and performed the music for In Habit: Life in Patterns (2012) and Words to Dead Lips (2011) in collaboration with the dance company Aniccha Arts. In 2017 he was commissioned by the Walker Art Center to compose music for former Merce Cunningham dancers during the Common Time performance series. His music appears in The Jeffrey Dahmer Files (2012) and he composed the music for the short Familiar Pavement (2015).
He has appeared on more than a dozen albums, including two solo albums on UnearthedMusic.com.

5 thoughts on “David Cope’s Emily Howell”

  1. machine-based music sometimes lacks soul, yes. then again, so can music composed by a human. i don’t think there’s an easy answer to this question. in my opinion it’s a matter of how artfully the machines are both constructed and utilized by the programmer/composer, and of the resulting composition.

    i was particularly interested in the section of the article in which Cope describes “asking musical questions” of Emily, or writing smaller purpose-built programs based on a stricter set of instructions, then paring down the result, much as a sculptor might eliminate the bits of marble that don’t belong.

    to me this is where the soul of computer music can come through. Cope seems to take on a sort of editorial role. in the end he is the one deciding which of Emily’s data-strings have something beautiful or moving to them, something worth recording or repeating. any other listener will have to make that decision too.

  2. Nicely put, Luke. You have a similar view to what I was trying to state in my post. I’m looking forward to hearing what happens when the machines become artists in their own right and no longer need our input or guidance to create meaningful work.

  3. I’m reading this with interest. As a software developer and a part-time composer, I do believe it is possible to program in some “soul”. If the extent, makeup, and pattern of what it is that we refer to as “soul” can be mapped out, that essence can be turned into an object – which in turn can be applied to the musical piece.

    Let’s put it this way: Emily’s data strings rely on Cope. Cope’s editing of Emily’s output makes Cope a “template”. I would envision that the Soul Program would be able to create soul templates out of any individual’s musical styles or preferences.

    In a way, we already have the tools out there. iTunes’ system of rating music, and the way that the program keeps track of what you are listening to, or prefer to listen to, out of your own database of songs, could be the basis of such a program.

    …just a thought.

  4. Subandi, I would disagree. I spent several years in college studying music, mostly classical, but with some 20th century composers mixed in. I don’t believe that computer-generated music can have soul because it, in and of itself, lacks emotion. If there is soul in this computer-generated music, then it comes from the editor of the piece, not the computer.

Leave a Reply