The hidden genius of Blob Opera – and how it could get even smarter

The complex machine learning behind Blob Opera hasn’t taken us to peak internet just yet – existing music AI could make things even better

Published: December 18, 2020 at 12:25 pm

This year wasn’t all bad. Really. Between the pandemic headlines, the last 12 months have seen huge leaps in space travel, quantum imaging and, of course, medicine. But now we’ve all been unexpectedly gifted 2020's crowning achievement: Blob Opera.

Developed by Google and AI artist David Li, the machine learning experiment allows users to compose their own operatic renditions through a quartet of colourful singing blobs. And all within a standard internet browser.

Simply dragging a blob up and down changes its pitch, while moving it side to side produces a range of vowel sounds. Meanwhile, the other blobs harmonise with this voice, responding in real time to any new direction. At the flick of a switch, users can even see the blobs recreate Christmas classics such as The First Noel or Joy to the World.

It is – and we cannot stress this enough – a whole lot of flubbery fun.

But although effortless to use, it was no easy feat to build. Modelled on 16 hours of singing from four professionals – Cristian Joel (tenor), Frederick Tong (bass), Joanna Gamble (mezzo‑soprano), and Olivia Doutney (soprano) – Blob Opera relies on AI to synthesise the noises you hear.

As Google says: “In the experiment, you don’t hear their voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them.”

In short, Blob Opera is very, very smart. But it might not be immediately clear just how smart. Partly, this is because Google hasn’t publicly released the code underpinning the experiment. But it’s also because Blob Opera probably relies on two AI networks working so well that you might not notice they’re there.


The first: the AI harmonising the blobs. “The Google code hasn’t been released, but most such computer harmonisers rely on a large dataset of existing harmonies,” says Dr Rebecca Fiebrink from the Creative Computing Institute at University of the Arts London.

“You then use a machine learning algorithm that can find patterns in that data. From that, the programme can work out, when one blob sings a certain note, what might be a nice set of next notes for the other blobs.”
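To make that idea concrete, here is a minimal sketch of the kind of pattern-based harmoniser Fiebrink describes: a toy corpus of four-part chords, a tally of which harmonies tend to accompany each melody note, and a function that samples one for the other three voices. The data and names are invented for illustration – this is not Google’s unreleased code.

```python
# Toy harmoniser: learn, from a small corpus of existing four-part harmonies,
# which chords tend to accompany a given melody note, then pick harmony notes
# for the other voices. (Hypothetical data - purely illustrative.)
import random
from collections import Counter, defaultdict

# Toy "dataset" of (soprano_note, (alto, tenor, bass)) pairs, as MIDI numbers.
corpus = [
    (72, (67, 64, 48)),  # C5 harmonised as a C major chord
    (72, (69, 65, 50)),  # C5 harmonised as an F major chord
    (74, (71, 67, 43)),  # D5 harmonised as a G major chord
    (74, (69, 65, 50)),
]

# Count how often each harmony accompanies each melody note.
patterns = defaultdict(Counter)
for melody_note, harmony in corpus:
    patterns[melody_note][harmony] += 1

def harmonise(melody_note):
    """Sample a harmony for the other three voices, weighted by frequency."""
    options = patterns.get(melody_note)
    if not options:
        return None  # melody note never seen in the training data
    harmonies, counts = zip(*options.items())
    return random.choices(harmonies, weights=counts, k=1)[0]

print(harmonise(72))  # e.g. (67, 64, 48)
```

A real system would of course be trained on a far larger corpus and would condition on more context than a single note, but the principle – find patterns in existing harmonies, then reuse them – is the same.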

The second music algorithm likely at work? Something much more complicated: an AI that makes it sound natural as the blobs move between notes.

“It’s basically like ‘inbetweening’ in cartoon animation,” explains Prof Nick Bryan-Kinns from Queen Mary University of London. “That’s when you have two frames and you get a computer to create an intermediate frame resulting in an overall smoother animation.

"Blob Opera is essentially doing that with sound. So, instead of getting opera singers to sing a million different notes, they’re getting a computer to work out how to make a smooth transition.”

Fiebrink adds: “Until a few years ago we weren’t able to use machine learning to synthesise really convincing operatic singing voices. That’s because, as humans, we're especially tuned to human voices. And if something sounds a little bit off, we notice it immediately. But Google and DeepMind have made some massive strides in recent years.”

However, while doubtless intelligent and well-crafted, Blob Opera isn’t revolutionary. Similar AI is already used in some video games, where transitional music is generated automatically between two longer pre-recorded pieces. The autotune that corrects errors in many pop hits is built on similar algorithms. In fact, the kind of harmonising machine learning behind Blob Opera was first developed in the 1980s.

As AI and musicology experts point out, there have been far more impressive developments in machine learning for sound. And they could easily be applied to Blob Opera.

How existing AI could make Blob Opera even better

It seems an impossible question after enjoying a blobby rendition of O Come, All Ye Faithful, but we’re asking it anyway: how could Blob Opera be improved?

Turns out, in several intriguing ways. Firstly, you – yes, you – could voice one of the blobs.

“It would actually be pretty easy,” says Fiebrink. “You could run a really simple pitch tracker on your voice in real-time and use the harmonisation component from Blob Opera. Anytime you change pitch, the other blobs could follow. You could build that phone app easily right now."
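Here is what that first step might look like offline, using the open-source librosa library to track the pitch of a recording and turn it into a note sequence a harmoniser could follow. A real app would process microphone input in real time; the filename below is just a placeholder.

```python
# Rough, offline sketch of pitch-tracking a sung melody so that other
# (blob) voices could harmonise with it. The audio file is a placeholder.
import librosa

y, sr = librosa.load("my_singing.wav", sr=None)  # placeholder recording

# pYIN pitch tracking: returns an f0 estimate (in Hz) for each analysis frame.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Keep only voiced frames and round to the nearest MIDI note,
# ready to feed into a harmoniser like the toy one above.
melody = [round(librosa.hz_to_midi(f)) for f, v in zip(f0, voiced_flag) if v]
print(melody[:10])
```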

“Also, instead of training the AI on opera singers, you could use barbershop quartets or other instruments. You could even have a blob K-pop group – because who doesn't want their own K-pop boy band harmony?”

Bryan-Kinns has another idea. “There’s also the possibility of creating genre mash-ups. You could easily pair this AI opera singing with music of other genres, like AI-created drum and bass loops. It could create new genres of music we’ve never heard before," he says.

“Sure, 90 per cent of it could sound absolutely terrible. But that other 10 per cent? It could sound absolutely amazing!”

This AI machine creates a music and light show based on a user's hand movements (World Internet Conference in Wuzhen, East China's Zhejiang Province, November 2020) © Getty

This might only be the start of the AI music revolution. Researchers like Dr Marcus Pearce of Queen Mary University of London are attempting to take this technology much, much further.

“My expertise is modelling the musical mind, trying to crack why we enjoy it in the first place – what are the brain mechanisms involved in enjoying music? And then, using this information and an AI, we might be able to generate music in real-time that a given person would find pleasurable at that precise moment," he says.

“It might not sound great for other people, but for that person, this music would be great."

But, as Pearce admits, such software is many years away. More behavioural and neuroimaging research is needed before the psychological mechanisms behind our enjoyment of music are properly understood. But if that data can be collected, a future music service may be able to scan your brain and generate that perfect, life-changing song in an instant.

For now, we’re left listening to four blobs belting out Jingle Bells – and, for the moment, that’s all we could wish for.

About the experts in this piece

  • Dr Rebecca Fiebrink is a reader at the Creative Computing Institute at University of the Arts London. Her current research includes how machine learning can be used to design new digital musical instruments.
  • Nick Bryan-Kinns is a Professor of Interaction Design at Queen Mary University of London. His research explores interactive technologies for media and arts.
  • Dr Marcus Pearce is a senior lecturer at Queen Mary University of London, where he teaches music perception and cognition.
