AI is about to shake up music forever – but not in the way you think

As machine-learning music specialist Prof Nick Bryan-Kinns explains, new neural networks are capable of writing original music – but may never compose meaningful lyrics.

Published: June 12, 2021 at 11:00 pm

Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here and it’s coming for your jobs.

That, at least, is what you might think after considering the ever-growing sophistication of AI-generated music.

While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with musicians such as François Pachet creating entire albums co-written by AI.

Some have even used AI to create ‘new’ music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.

Even stranger, this July countries across the world will compete in the second annual ‘AI Song Contest’, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you’re wondering, the UK scooped more than nul points in 2020, finishing in a respectable sixth place.)

But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon “make musicians obsolete”?

To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below he explains how AI music is composed, why this technology won’t crush human creativity – and how robots could soon become part of live performances.

How easy is it to create AI music?

Music AIs use neural networks – really large collections of simple computing units that try to mimic how the brain works. You can basically throw lots of music at a neural network and it learns the patterns in it – just like the human brain does by repeatedly being shown things.
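To make that idea concrete, here is a minimal sketch in PyTorch of ‘throwing music at a neural network’: a tiny next-note predictor trained on a toy corpus. Everything here – the model size, the random stand-in data – is illustrative, not any particular production system.

```python
# Minimal sketch: teach a tiny network to predict the next note in a melody.
# Pitches are MIDI note numbers (0-127); random sequences stand in for real songs.
import torch
import torch.nn as nn

class NoteModel(nn.Module):
    def __init__(self, n_pitches=128, embed=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pitches)

    def forward(self, notes):                  # notes: (batch, time)
        out, _ = self.lstm(self.embed(notes))  # (batch, time, hidden)
        return self.head(out)                  # logits for the next pitch

model = NoteModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

corpus = torch.randint(0, 128, (256, 33))      # toy "back catalogue"

for step in range(100):                        # repeatedly show it the music
    inputs, targets = corpus[:, :-1], corpus[:, 1:]
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, 128), targets.reshape(-1))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```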

What’s tricky about today’s neural networks is that they’re getting bigger and bigger, and it’s becoming harder and harder for humans to understand what they’re actually doing.

We’re getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don't really understand the details of what it's doing.

These neural networks also consume a lot of energy. If you're trying to train AI to analyse the last 20 years of pop music, for instance, you're chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we’re going to have to question whether the environmental impact is worth this new music.

Could an AI in future ever develop music completely by itself?

I'm a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is likely still a human selecting which ones they think are nice or enjoyable.

There's a little bit of smoke and mirrors going on with AI music at the moment. You can throw Amy Winehouse’s back catalogue into an AI and a load of music will come out. But somebody has to go and edit that. They have to decide which parts they like and which parts the AI needs to work on a bit more.
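Continuing the toy sketch above, the workflow he describes looks roughly like this: the model cheaply samples hundreds of candidate melodies, and a human curates. The generate function and the crude ‘interestingness’ filter standing in for human taste are both hypothetical.

```python
import torch  # model is the toy NoteModel trained in the earlier sketch

def generate(model, length=32, start=60):
    """Sample one melody by feeding the model its own predictions."""
    notes = [start]                            # start on middle C
    for _ in range(length - 1):
        logits = model(torch.tensor([notes]))[0, -1]
        probs = torch.softmax(logits, dim=0)
        notes.append(torch.multinomial(probs, 1).item())
    return notes

# The AI proposes hundreds of tracks cheaply...
candidates = [generate(model) for _ in range(200)]

# ...but a human still curates. This filter is a crude stand-in for taste:
# keep only melodies that use a reasonable variety of pitches.
keepers = [m for m in candidates if len(set(m)) > 8]
```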

The problem is that we're trying to train the AI to make music that we like, but we're not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.

Will an AI ever create lyrics that are emotionally meaningful to humans?

I'm kind of sceptical on that one as well. AI can generate lyrics that are interesting and have an engaging narrative flow. But lyrics for songs are typically based on people's life experiences, what's happened to them. People write about falling in love, things that have gone wrong in their life or something like watching the sunrise in the morning. AIs don’t do that.

I'm a little bit sceptical that an AI would have that life experience to be able to communicate something meaningful to people.

Could AI’s greatest contribution to music be creating new genres?

This is where I think the big shift will be – mash-ups between different kinds of musical styles. There’s research at the moment that takes the content of one kind of music and puts it in the style of another, exploring maybe three or four different genres at once.

While it’s difficult to try these mash-ups in a studio with real musicians, an AI can easily try a million different combinations of genres.
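One common research approach to such mash-ups is blending in latent space: encode two pieces into compact vectors, mix the vectors, and decode the mix (Google Magenta’s MusicVAE interpolates between melodies in a related way). The untrained, toy-sized autoencoder below only shows the shape of the operation; real systems learn the encoder and decoder from large corpora.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(128, 16)   # piece features -> compact latent code
decoder = nn.Linear(16, 128)   # latent code -> piece features

jazz = torch.rand(128)         # stand-ins for learned feature vectors
techno = torch.rand(128)       # of two pieces in different genres

# Blend the two pieces in latent space, then decode the blend.
z = 0.7 * encoder(jazz) + 0.3 * encoder(techno)
mashup = decoder(z)

# Sweeping the blend weight lets an AI try many combinations cheaply.
sweep = [decoder(w * encoder(jazz) + (1 - w) * encoder(techno))
         for w in torch.linspace(0, 1, 11)]
```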

Could AI eventually put human musicians out of the job?

People say this with every introduction of new technology into music. With the invention of the gramophone, for example, everybody was worried, saying it would be terrible and the end of music. But of course, it wasn't. It was just a different way of consuming music.

AI might allow more people to make music, because it's now much easier to make a professional-sounding single using just your phone than it was 10 or 20 years ago.

A woman interacts with an AI music conductor during the 2020 Internet Conference in Wuzhen, Zhejiang Province of China. © Getty

At the moment, AI is like a tool. But in the near future, it could be more of a co-creator. Maybe it could help you out by suggesting some basslines, or giving you ideas for different lyrics that you might want to use based on the genres that you like.

I think the co-creation between the AI and the human – as equal creative partners – will be the really valuable part of this.

How good is AI at replicating human singing?

AI can create a pretty convincing human voice simulation these days. But the real question is why you would want it to sound like a human anyway. Why shouldn’t the AI sound like an AI, whatever that is? That’s what’s really interesting to me.

I think we're way too fixated on getting the machines to sound like humans. It would be much more interesting to explore how it would make its own voice if it had the choice.

What other future technology could transform the music industry?

I love musical robots. A robot that can play music has been a dream for so many for over a century. And in the last five or 10 years, it's really started to come together: you've got AI that can respond in real time, and you've got robots that can actually move in very human, emotional ways.

The fun thing is not just the music that they're making, but the gestures that go with it. They can nod their heads or tap their feet to the beat. People are now building robots that you can play with in real time, in a sort of band-like situation.

What’s really interesting to me is that this combination of technology has come together where we can really feel like it's a real living thing that we're playing music with.

In future, could solo performers tour around the globe with a band of robots?

Yeah, for sure. I think that'd be great! It will be interesting to see what an audience makes of it. At the moment it's quite fun to play as a musician with a robot. But is it really fun watching robots perform? Maybe it is. Just look at Daft Punk!

About our expert, Prof Nick Bryan-Kinns

Nick Bryan-Kinns is director of the Media and Arts Technology Centre at Queen Mary University of London, and professor of Interaction Design. He is also a co-investigator at the UKRI Centre for Doctoral Training in AI for Music, and a senior member of the Association for Computing Machinery.
