Remarks on “Technology & Creativity”

I wrote the following as introductory remarks to the research session at the University of Michigan’s “Technology & Creativity” summit on March 5, 2024. This piece draws on my interviews with and research about classical composers and experimental musicians who have used A.I. in their creative practice since the 1950s.

Following the transcript, I’ve included an overview of my workshop, “Making Music with A.I.: Basics and Precedents”, along with some useful links.

Hello — to begin, I would like to thank Mark Clague from ARIA and Jing Liu from MIDAS for organizing today's events and inviting me to give these remarks to open this special session focused on new research involving A.I. and creativity. 

My name is Garrett Schumann, and I am a Lecturer here at the University of Michigan who teaches primarily within the Applied Liberal Arts area of the College of Literature, Science & the Arts. I am so excited to talk to all of you and to learn more about the expertise, ingenuity, and enthusiasm you bring to your work with A.I. on campus.

I am a composer, not a computer scientist, so I come to this topic from an observer's perspective. Last year, I wrote features about musicians who use A.I. in their work for The New York Times and Chamber Music, a nationally distributed magazine published by the nonprofit Chamber Music America. I interviewed and researched dozens of experts from around the world whose work with and thoughts about the state of artificial intelligence deserved celebration as well as urgent attention, particularly at a time when the marketing surrounding public-facing, generative tools dominates discourse on this subject.

The more I learned about the history of A.I. in music and other creative applications, the clearer it became that, while the technology has become much more powerful and accessible in recent years, the questions it inspires have not changed very much in the last several decades. Appropriately for today, my writing on this topic underscored the special and fraught connection between “technology & creativity” that artificial intelligence illuminates.

Since the 1950s, musical experimentation with A.I. has challenged existing understandings of practice and carried important implications for authorship, power, and authenticity. Foundational to the specific questions confronted in individual research projects has been the technology's impact on people, collaboration, and the role humans play in the creative process.

In 1956, when University of Illinois faculty members Lejaren Hiller and Leonard Isaacson first programmed the university's ILLIAC I computer to algorithmically generate new musical ideas, the two had no choice but to intervene heavily and 'translate', in a manner of speaking, the computer's outputs into a notated musical composition for string quartet titled Illiac Suite. Meaningfully, this project was highly collaborative and interdisciplinary, and its mission was simple by contemporary standards: can a computer independently produce performable musical materials if it operates within a tightly controlled, rules-based system?

Three decades later, when the composer George Lewis debuted Voyager — a sophisticated and interactive complex of algorithms he describes as “a nonhierarchical musical environment that privileges improvisation” — his goals and tools were much more advanced. Unlike Hiller and Isaacson before him, Lewis was able to program Voyager himself. But, similar to Illiac Suite, Lewis' work remains fundamentally collaborative.

In Voyager, the computer interprets music made by human co-performers in real time and generates new material in response to that input. In this case, Voyager's existence as a participant in live performance allows Lewis to explore new questions about technology's relationship to human creativity: how does group improvisation change when one of the performers is a computer? Can artificial intelligence help musicians trained in the European musical tradition achieve 'multi-dominance', an aesthetic state associated with African musical practice?

The 1980s saw two other distinctive experiments that, unlike Illiac Suite or Voyager, worked to expand creativity on an individual basis. While working at Bell Labs in 1980, composer Laurie Spiegel developed A Harmonic Algorithm with the aim of obviating the corporeality of human creativity. As Spiegel describes in a program note, "if instead of composing individual finite length works, a composer could encode in computer software their personal compositional methods…then they could go on composing and generating new music long after the biological human had ceased to exist."

The next year, David Cope, a composer who teaches at UC Santa Cruz, began working on what would become his book, Experiments in Musical Intelligence, which details some of the first applications of machine learning in music. Like Spiegel, Cope initially aimed to train a computer to reflect his personal compositional style and creative preferences through a sophisticated process of deconstruction, pattern recognition, and recombinancy.

Ultimately, Cope moved beyond an exploration of how this type of artificial intelligence could supplement his individual musical practice and applied the technology to well-known, historical composers' music, producing new compositions based on the software's comprehensive analysis of their style. While the Illiac Suite, Voyager, and A Harmonic Algorithm all challenge audiences to consider the expressive 'authenticity', so to speak, of computer-generated music, Cope's work is some of the first that attempts to wholly supplant another person's creative voice.

In 2024, the question of incorporating computer-generated ideas into musical practice is much broader, as almost anyone can access and use A.I. tools without computer programming skills. Please attend my 'Making Music with A.I.' workshop later today, or my colleague Brian Miller's, to experience this state of affairs firsthand.

The technology's current scale and power also increase its capacity to imitate people, not only a specific musician's style but also an individual's vocal performance. The people I talked to last year — Brown University professor Dr. Enongo Lumumba-Kasongo, IRCAM researcher Dr. Antoine Caillon, and Cal Poly Pomona professor, and UM alum, Dr. Isaac Schankler, among others — are less concerned with what these tools can do and more attentive to how they are used and why their application to music is justified with respect to resource allocation, creative systems, and general ethics.

The projects that emerge from this room will certainly deal with these questions and quandaries in their own way, but they will also carry the bewildering excitement of potential. At a time when independent engagement with this technology is largely limited due to the consolidation of control over access to necessary computational resources, it is thrilling to have the University of Michigan invest in opportunities like this one in which new, creative, and collaborative research can begin.

Thank you.

Overview of ‘Making Music with A.I.: Basics and Precedents’ workshop on March 5, 2024:

This workshop began with a short introduction that described key takeaways from my experience writing about musicians’ work with artificial intelligence. Something I emphasized, and then returned to throughout the session, is the problem of creating compelling musical works from A.I.-generated materials.

These tools may be easy to use and impressive as signifiers of advanced computation, but those factors do not yield compelling musical statements on their own. A common refrain in the workshop was, “How can we make this interesting?”

We began with Holly+, a widely accessible A.I. vocal synthesis model of composer/vocalist Holly Herndon’s voice. Workshop participants used this link to access a Google Drive Folder with example audio files (UM authentication required), which they downloaded and dropped into the browser page for Holly+, producing newly transformed audio files.

I began this activity with a demo example that used the isolated vocals of Canadian singer Kiesza performing the song “Hideaway” (2014), which I made using a different open-access, web-based A.I. tool. But all the examples I shared with participants were of instrumental compositions.

This led to a lively discussion around the question, “Why would you put audio of instrumental music into a vocal synthesis model?” I argued that it is more interesting, creatively, to experiment with a transformation of timbre (changing instrumental music to vocal music) than to replicate something that has already been recorded by a vocalist, an idea the composer Isaac Schankler shared with me when I interviewed them for my June 2023 feature in The New York Times on this subject.

The rest of the activities used the popular, free, browser-based DAW Bandlab. At the end of the workshop, I discussed Bandlab’s newer A.I.-powered features, namely ‘SongStarter’, which generates 30 seconds of music based on lyrics the user submits or a stylistic prompt. SongStarter is the most powerful of all the tools I shared with the participants in this workshop, but, even so, it is extremely limited.

Bandlab only offers a handful of stylistic prompts from which the user can select. All relate to some kind of popular music, and there is no ability to customize the output until it has already been created. Also, the program’s accuracy in representing specific styles left a lot to be desired: I chose “pop” and the result sounded more like “Lo-Fi” (a different default option).

Before this closing activity, I had participants prompt ‘UM-GPT’, the University of Michigan’s proprietary text generator, to create melodies in specific styles using ABC notation (a text-based format for notating music). We then used one of the text-to-MIDI conversion options listed on this Github page to ‘translate’ the ABC notation into a MIDI file, which we imported into an empty Bandlab project for performance.
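For anyone who wants to try that conversion step outside the browser, here is a minimal sketch using the music21 Python library; this is my own choice of tool, not necessarily one of the converters listed on the Github page, and the ABC melody below is a simple placeholder rather than actual UM-GPT output.

# A sketch of the ABC-to-MIDI step using the music21 library
# (my own choice of tool, not necessarily one of the workshop's converters).
from music21 import converter

# A short, hypothetical ABC melody standing in for UM-GPT's output:
# X is the reference number, T the title, M the meter, L the default note length, K the key.
abc_melody = """
X:1
T:Example Melody
M:4/4
L:1/8
K:C
C D E F G A B c | c B A G F E D C |
"""

# Parse the ABC text into a music21 score, then write it out as a MIDI file
# that can be imported into a DAW such as Bandlab.
score = converter.parse(abc_melody.strip(), format='abc')
score.write('midi', fp='example_melody.mid')

I chose music21 here only because it handles both the ABC parsing and the MIDI export in a few lines; any of the converters on that Github page should produce a comparable file.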

Procedurally, this activity was the most involved, but its outputs were certainly the least interesting and the least useful creatively. The text generator’s understanding of musical style was nearly nonexistent, and the monophonic melodies it produced offered little to build on. Yet, it would not surprise me if apologists for A.I. were more impressed by this activity than the others.
