
Three Entistatios

Three Entistatios for 12-part chamber ensemble is a snapshot of my research into incorporating artificial intelligence-generated musical material into a concert work. It also demonstrates a structural and narrative direction I have since developed further: using the process of machine learning as a map for the structure of a piece of music. I worked on the project in the first six months of 2019, and the work was premiered in June 2019. It resulted not only in this piece of music but also in a prototype methodology for using symbolic-generative AI in music that does not rely on simply incorporating the generations of a machine learning algorithm verbatim.

The work is divided into three movements, each of which approaches machine learning in a slightly different way. This post will provide a brief overview of the research that went into the work, but for more information please refer to the sources at the bottom.

In my experience, most research into music-generating algorithms is concerned with one question: whether the algorithm can imitate a style or composer so successfully that human listeners cannot tell the difference. This inevitably involves training an algorithm until it cannot learn any more; many articles and papers focus on the fully-trained algorithm and show little interest in the process of getting there. In this piece I wanted to challenge the notion that the fully-trained algorithm is the most useful stage of machine learning. To achieve this, I used material from different algorithms stopped at different points in the learning process.
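To make the idea concrete, here is a minimal sketch of what "stopping at different points" can look like in practice. It is not the tooling used for the piece: the PyTorch model, token encoding, training data and checkpoint schedule below are all invented for illustration.

```python
# Sketch: save snapshots throughout training and sample from each one,
# so early, half-formed generations are available alongside mature ones.
import torch
import torch.nn as nn

VOCAB = 128          # e.g. MIDI pitch numbers (illustrative encoding)
SEQ_LEN = 32

class TinyMusicLM(nn.Module):
    """A deliberately small next-token model over symbolic music events."""
    def __init__(self, vocab=VOCAB, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)

def sample(model, prime, length=64, temperature=1.0):
    """Autoregressively generate `length` tokens after the `prime` tokens."""
    model.eval()
    tokens = list(prime)
    with torch.no_grad():
        for _ in range(length):
            x = torch.tensor(tokens[-SEQ_LEN:]).unsqueeze(0)
            logits = model(x)[0, -1] / temperature
            tokens.append(torch.multinomial(logits.softmax(-1), 1).item())
    return tokens

model = TinyMusicLM()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()
data = torch.randint(0, VOCAB, (512, SEQ_LEN + 1))  # stand-in for a real corpus

CHECKPOINT_STEPS = {0, 10, 100, 1000}  # early, middle and late snapshots
for step in range(1001):
    if step in CHECKPOINT_STEPS:
        # Material generated from these snapshots, *before* convergence,
        # carries the chaotic, half-formed sense of style the piece explores.
        torch.save(model.state_dict(), f"ckpt_{step:04d}.pt")
    batch = data[torch.randint(0, len(data), (32,))]
    logits = model(batch[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Each checkpoint yields generations of a different "maturity" from one prime.
print(sample(model, prime=[60, 62, 64, 65], length=16))
```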


The first movement focusses on incorporating material generated at the start of a training process, before the neural network has solidified its understanding of patterns, structure, line and harmony in music. It utilises "Clara", an algorithm created by Christine Payne, trained on the same dataset of J.S. Bach's music that I used in an earlier piece, "Turing Test//Prelude" (whose algorithm was implemented for me by Parag Mital). The generations produced by Clara were interesting in many ways, and I was able to analyse them to identify shared qualities, which I expanded upon by composing many more "generations" of my own. With this collection of musical cells, some composed partially or mostly by AI, some entirely by me, I developed compositional methodologies for organising, layering and collaging the results so that the movement's global structure might audibly represent the chaos and uncertainty of each individual cell.

The second movement utilises MuseNet, also created by Christine Payne (this time published by OpenAI). MuseNet is a general-purpose music model that we fine-tuned on my own music by providing a dataset of my scores. I was surprised and delighted to find that MuseNet was able to pick up on many aspects of my compositional style and filter them through its own lens, but it was completely unable to emulate some areas of music (such as structure, or melodic line and coherence) that I consider basic. The second movement consists of a conversation between myself and the algorithm, each providing the ends to the other's sentences.
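The turn-taking structure of that conversation can be sketched in code. Nothing below is MuseNet itself, which I am not assuming is callable this way: `model_continue` and `human_phrase` are hypothetical stand-ins (returning invented notes) that exist only to make the alternation runnable.

```python
# Sketch: a call-and-response loop in which a model finishes the composer's
# phrase and the composer finishes the model's, turn by turn.
import random

def model_continue(prime, length=8):
    """Hypothetical model call: continue `prime` with `length` new tokens."""
    random.seed(sum(prime))                  # deterministic for the demo
    return [random.randrange(48, 84) for _ in range(length)]

def human_phrase(previous):
    """Stand-in for composition: the composer answers the model's phrase."""
    return [p - 2 for p in previous[-4:]]    # an invented, illustrative reply

dialogue = [[60, 62, 64, 67]]                # the composer opens the conversation
for turn in range(3):
    dialogue.append(model_continue(dialogue[-1]))   # model ends the human's sentence
    dialogue.append(human_phrase(dialogue[-1]))     # human ends the model's sentence

for i, phrase in enumerate(dialogue):
    who = "human" if i % 2 == 0 else "model"
    print(f"{who}: {phrase}")
```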

The final movement stems from imagining the difference in sophistication and coherence between the untrained Clara and the more fully-trained MuseNet as a line on a graph, and then extending that line far into the distance. What might an algorithm inevitably sound like if it continued to develop in the same way? The movement takes just one idea - a twenty-note cell - and repeats it twenty times, each time becoming faster and quieter. This single-minded obsessiveness seemed to me to be the antithesis of the first movement's capricious indecisiveness.
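The process itself is simple enough to sketch directly. The pitches, tempo curve and dynamic curve below are invented placeholders; only the shape of the process - twenty statements, each faster and quieter - comes from the piece.

```python
# Sketch: one twenty-note cell stated twenty times, accelerating and fading.
CELL = [60, 62, 63, 65, 67, 68, 70, 72, 74, 75,
        77, 79, 80, 82, 84, 85, 87, 89, 91, 92]   # twenty hypothetical pitches

REPEATS = 20
events = []          # (onset_seconds, pitch, velocity)
onset = 0.0
for rep in range(REPEATS):
    note_dur = 0.5 * (0.9 ** rep)        # each pass ~10% faster than the last
    velocity = 100 - rep * 4             # and a step quieter (100 down to 24)
    for pitch in CELL:
        events.append((round(onset, 3), pitch, velocity))
        onset += note_dur

print(f"{len(events)} notes over {onset:.1f} seconds")
print("first:", events[0], " last:", events[-1])
```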

Sources

My chapter on AI and Music [[upcoming publication]]

Pre-premiere interview

Interview on Music Matters (I appear around 40 minutes in)

MuseNet

Clara

[Image: example of cells]