

Robert Laidlow, August 2022

This article originally appeared in Musical Opinion Magazine, Autumn 2022 Issue.

For the past four years, I have been investigating the intersection between artificial intelligence (AI) and the symphony orchestra. This has resulted in my composition Silicon (2022) for symphony orchestra and artificial intelligence, which I have written for the BBC Philharmonic, who have been my partners throughout this process alongside PRiSM (the centre for Practice & Research in Science & Music).

Since there are so many definitions of AI, I will provide an explanation of how the term is meant here. In short, AI is computer code that can learn how to complete certain tasks on its own. It’s used everywhere in our lives behind the scenes, from satellite navigation to targeted advertising and medical research. It’s also increasingly used in the arts, including music, to assist artists or even fully automate certain tasks. It doesn’t refer to a sentient, conscious computer (yet), but to an algorithmic methodology that increasingly informs the way our society and infrastructure are organised, both online and offline.
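To make "code that can learn a task on its own" concrete, here is a toy sketch of my own (not any system discussed in this article): a few lines that learn to multiply by an unknown factor purely from examples, rather than being programmed with the rule.

```python
# Learn the rule y = 3x from examples alone, never being told "multiply by 3".
examples = [(x, 3 * x) for x in range(1, 6)]

w = 0.0    # the model's single adjustable parameter
lr = 0.01  # learning rate: how far to nudge w after each mistake

for _ in range(1000):
    for x, y in examples:
        pred = w * x
        w -= lr * (pred - y) * x  # nudge w in the direction that reduces error

assert abs(w - 3.0) < 1e-3  # the code has "learned" the task from data
```

Real systems have billions of parameters rather than one, but the principle is the same: behaviour is fitted to data, not written by hand.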

AI’s relationship to classical music currently stands at a fork in the road. In one direction, there is huge potential for this technology to revolutionise the way music can be performed, enjoyed, and composed. Silicon incorporates what I think are some of the most exciting of these possibilities. I see orchestral music as a fantastic space in which new ideas and developments can be explored, and later in this article I will detail some of this potential.

In the other direction, there is threat. The advent of AI, and the kind of society that is growing around such advanced technologies, poses an existential challenge to classical music. This doesn’t only mean that AI will be composing and performing music to replace humans, though this is also likely to be the case. Rather, AI threatens some of the fundamental tenets underpinning classical music in ways that no technology has done previously. There is a danger that soon classical music will be simply irrelevant - not because computers have automated the work away from humans but because what the genre stands for is not important in the age of AI.

Take the idea of authorship, for example. Classical music has always been fundamentally interested in who wrote the music. The label of genius is liberally applied to individual composers, justifying their inclusion in a programme and providing an enticing reason for audiences to attend.

AI, however, has already begun to challenge the notion of authorship. The art-creating AI ‘Dall-E’ has captured the imagination of both press and public, with wide global coverage and over a million people signed up to create their own artworks with it. The user types in what they want to see, and ‘Dall-E’ uses AI to generate that image. It is only a matter of time before a similarly advanced AI is designed to allow any user to easily generate their own music – I have even used several prototypes in my piece Silicon. Who is the author here? Is it the user, the team that created the AI, or the AI itself?

Even asking the question is the wrong approach. Supposing a single author could be determined, the nature of modern Internet-driven life is such that any art that people find meaningful is shared and reshared countless times until its provenance is utterly forgotten. This is seen in Internet memes, one of the most important and widespread forms of expression and communication on the planet. Everybody recognises Internet memes, but nobody knows who created them. Nobody cares who created them. Only the work remains: authorship does not matter.

Recently the classical music canon has come under heavy scrutiny for its focus on white, European, male composers to the exclusion of many others. Soon, however, even the idea of a canon of named composers (no matter who populates it) might be completely irrelevant to the way certain audiences understand art. There will always be many, of course, who do care – but can classical music truly thrive by appealing only to this (shrinking) minority?

The counterargument to this might be that even if AI can generate music, it cannot be creative in the same way that a human can be. Creativity is the realm of the human: at best, AI might imitate existing artists, but never exceed them. Mahler famously said that “a symphony must be like the world. It must contain everything” – this seems an impossible task for an AI, which by its nature does not understand the world but only specific statistical tasks.

This is certainly what Lee Sedol, the global number two at the game Go, thought when he began his famous set of matches against the AI AlphaGo in 2016. Go has long been considered a hugely creative game, requiring a kind of “innate” knowledge that occasionally manifests itself in moves known as “divine moves” – moves which ultimately win the game, but which even the player cannot explain at the time. Lee lost 4-1 and later retired from the game, commenting that “even if I become the number one, there is an entity that cannot be defeated”. In the second game, AlphaGo made its own divine move (the now famous “Move 37”) which, though at the time it baffled both Lee and commentators, secured its victory. A year later, AI defeated the world number one too.

This leaves me wondering when music will have its AlphaGo moment. There is a clear parallel between the concept of the divine move in Go and the sublime in classical music. The sublime, referring to greatness or beauty that is utterly incalculable and inimitable, has always been a key concept in classical music. Historic composers are often described as writing sublime music, but the idea also extends to performing musicians who are perceived to have something special about their playing or conducting. If an AI can learn, statistically, to make divine moves, there is no reason to believe it cannot learn in the same way to create sublime musical performances. This is a real challenge to classical music, which relies so often on the unknowability of the sublime for part of its appeal.

These abstract threats to classical music are developing simultaneously with the “automation” threats that AI poses to all musical fields, including automatic generation of music and realistic-sounding performances by computers. Technology has been perceived as a challenge to classical music before; radio, recording technology and, more recently, sample libraries of orchestral sounds have all been flagged as threats to the classical status quo. But there is a crucial difference. These technologies challenged the way classical music was funded and disseminated, but they did not challenge the foundations upon which classical music stands. AI threatens to make classical music not only uneconomical, but pointless.

Ignoring it with a “keep calm and carry on” attitude will not avert its course. Of course, it’s not so black-and-white as “adapt or die” but being entirely left behind in this fast-changing world would certainly be bad news for festivals, orchestras, composers, and performers. These two futures of AI and classical music exist simultaneously, and it is our responsibility to ensure the potential wins out above the threat – that classical music is changed for the better by AI.

As a composer, I am not best placed to say how organisations should approach AI – though I’d love to see a “Technology Orchestra” project, a modern-day version of the many “Radio Orchestras” that came into existence in the 20th century.

Happily, there are many organisations already working on guiding the relationship between advanced technology and classical music to whom I can point. Recent examples include the Royal Opera House’s Current, Rising, a prototype of “hyperreality opera” with music by Samantha Fernando, the Barbican Centre’s Life Rewired season, and IRCAM’s work on AI for musicians. There are also my own partners in composing Silicon: the BBC Philharmonic Orchestra and the Royal Northern College of Music’s Future Music festival, now in its fourth year.

When planning and composing the three movements of Silicon, AI - the technology itself and its wider social ramifications - informed my work in two main ways.

First, I wanted to co-opt the kind of technology intended to replace musicians, transforming it into something that could instead empower them. At several points throughout the piece, musicians are given direct control of technology designed to replicate their own sound exactly. During the third movement, for example, the orchestral percussionists use their mallets to physically mute and un-mute an AI generating fake orchestral music.

In the first movement, an AI designed to imitate specific composers’ styles – in this case, Mozart – generated some of the initial ideas, which I then orchestrated and developed into the parts the players perform. Turning technology that could silence orchestral players into a means of giving them more music to perform was, to me, a simple but powerful idea. When rehearsals begin, I’m looking forward to hearing how the players stamp their own interpretations onto this material, especially since they are so used to performing Mozart’s actual music.

Second, Silicon is intended to recontextualise the orchestra into a laboratory. Often the orchestra can feel like a sonic museum, showcasing old music on old instruments. I wanted to use AI to rethink what might constitute an orchestra, now and in the future. During the second movement, a brand-new AI-powered instrument is embedded within the orchestra, to be performed by the orchestral pianist. The instrument uses AI to transform one sound into another in real time, resulting in timbres that can be beautiful, uncanny, absurd, and frightening. Composing this music was exciting and thought-provoking, as I not only had to get to grips with the possibilities and limitations of a new instrument, but also deepened my understanding of how this technology could blend with or disrupt the orchestral status quo.

In the third movement, we went one step further. Using thousands of hours of radio concerts from the BBC Philharmonic archive, we taught an AI how to create orchestral sounds of its own. This “AI orchestra” is heard simultaneously with the on-stage orchestra, each providing half of the music. Some of the most fascinating discoveries from this experiment lay in the AI imitating sounds from the radio broadcasts we wouldn’t consider music, such as the presenter introducing the orchestra, tuning, and audience applause. These artifacts were kept in alongside the more recognisably symphonic material. It was a good reminder that the distinction between “music” and “sound” is, currently, a human-only notion.

The idea of an orchestral laboratory was not only realised through literal integration of technology. Sometimes principles of AI could be applied to the orchestra through purely acoustic means. An interesting facet of some AI musical research is that it does not automatically understand music to move through time, but rather sees music as a static object existing all at once (like a painting). It needs to be told what time is, and in which temporal direction the music travels. I wondered if, in some hypothetical future, we might hear AI-generated music that happens all at once (similar to Ablinger’s Weiss / Weisslich 22 which simultaneously compresses the symphonies of well-known classical composers into 40 seconds), or flows end-to-start, top-to-bottom, or another orientation distinct from the traditional start-to-end. Silicon’s first movement imports this flexible approach to time through the development of extended instrumental techniques that make the sounds of the orchestra appear to be in reverse.
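The idea that a model sees music as a static object can be sketched in a few lines. This is a toy piano-roll of my own invention, not any system used in Silicon: the music is just a grid, and “time” is merely one index of that grid, which can be reversed as easily as it can be read forwards.

```python
# Toy piano-roll: a passage as a static grid rather than a process in time.
# Each entry holds the set of pitch numbers sounding at one time step.
steps = 16
roll = [{40 + t} for t in range(steps)]  # a simple ascending line

# To a model that sees the whole grid at once, time is just an index; it
# must be told which direction the music flows. Reversing the index
# "plays" the identical object end-to-start:
reversed_roll = list(reversed(roll))

assert roll[0] == {40}           # forwards, the lowest note comes first...
assert reversed_roll[0] == {55}  # ...reversed, the highest note comes first
```

The object itself never changes; only the direction in which we choose to traverse it does, which is what suggested the reversed-sounding orchestral techniques described above.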

In all cases, the exact relationship between orchestra and AI was not clear when I started work, and in many ways the success or failure of these ideas and technologies will only be known after the premiere. This real sense of experimentation has been exciting, but we have only scratched the surface of the orchestra-laboratory in this piece.

This work began to make me understand how classical music might function as a space in which to explore the relationship between people and technology. While classical music can explore any number of ideas, many issues raised by AI relate to questions that classical musicians already consider deeply and often. It seems to me that classical music has the potential to be both relevant and useful for new audiences looking to understand AI’s role in their own lives. The previously discussed notion of authorship, for example, is closely tied to the idea of authenticity, an area of lively debate within both classical music scholarship and performance (from historically informed performance to re-written cadenzas) and AI (from whether AI-generated art has any meaning whatsoever to the ethics of using AI to reanimate deceased actors in new films).

Another example is the modern attention span. AI both shortens and lengthens the spans over which we pay attention: it is largely responsible for the shrinking modern attention span through social media, but it also plays a role in widening a kind of “scientific attention span” through the analysis of geological or cosmological data. Classical music already considers time spans carefully – here I am thinking of large-scale symphonic form, the Ring Cycle, or even John Cage’s As Slow As Possible, which is currently undergoing a 639-year performance in Halberstadt.

I considered both examples in the composition of Silicon. I was interested in what constitutes authentic and inauthentic music throughout the piece, and in many ways the musical arguments in each movement revolve around this question. The music also exists on several attention spans, from “scrolling” music frenetically jumping between tiny sound worlds to long periods of quiet stillness.

None of this work could be done by one person alone, and I am hugely grateful to all my technical partners over the last four years. These include PRiSM, OpenAI, Google, Oxford University, and the Alan Turing Institute. Collaboration between artists and scientists is one way that the two fields can positively engage with one another. Since this technology is not controlled or developed by musicians but will affect us in many ways, I believe collaboration is just as important as adaptation to ensure classical music continues to thrive.

In this article I’ve tried to articulate just some examples of the potential benefits and dangers of AI relating to classical music, and how they have influenced my work. While I don’t believe that every artist or organisation needs to radically reshape their practice, I do believe this technology and the audiences it creates need to be taken seriously. In doing so, classical music can position itself as an important and engaging artform for the current day, and this does not need to come at the total expense of existing audiences and established norms. I am left with many more questions than answers at this stage, but I am sure that this technology will profoundly change the way we create, listen to, and find meaning in, music.

