This tiny worm’s brain could transform artificial intelligence. Here’s how

‘Liquid neural networks’ promise smaller, smarter and more transparent AI – and they’re already running on devices from drones to self-driving cars



Today’s artificial intelligence (AI) models are behemoths. They run on billions of parameters, trained on oceans of data, all hosted in energy-hungry server farms. 

But does it have to be this way? Apparently not. One of the most promising new contenders for the future of machine intelligence started with something much smaller: a microscopic worm. 

Inspired by Caenorhabditis elegans, a millimetre-long creature with just 302 neurons, researchers have created ‘liquid neural networks’ – a radically different kind of AI that can learn, adapt and reason while running on a single device. 

“I wanted to understand human intelligence,” Dr Ramin Hasani, co-founder and CEO of Liquid AI, a company at the forefront of this tiny revolution, told BBC Science Focus. “But when I started to look at what information we have available on the human brain, or even rat or monkey brains, I realised we have almost nothing.”

At the time, the animal with the most comprehensively mapped nervous system was C. elegans. So that’s where Hasani and his colleagues started.

Hasani’s fascination with C. elegans wasn’t about its behaviour, but its ‘neural dynamics’ – the way its cells communicate.

Neurons in the worm’s brain communicate through graded, analogue signals rather than the sharp, all-or-nothing electrical spikes found in larger animals, which behave more like digital signals. As nervous systems evolved and organisms grew bigger, spiking neurons became a more efficient way to send information over long distances.

Yet the roots of human neural computation still trace back to that analogue world.

For Hasani, this was a revelation. “Biology as a whole is a fascinating way to reduce the space of possibilities,” he said. “Billions of years of evolution have searched through all possible combinations of building efficient algorithms.”

Rather than copying the worm’s biology neuron by neuron, Hasani and his collaborators set out to capture its essence – flexibility, feedback and adaptability.

“We’re not doing biomimicry,” he said. “We’re trying to get inspired by physics and nature and neuroscience to get to a point where we can bring value to artificial neural networks.”

What makes them ‘liquid’

Traditional neural networks, like the ones behind today’s chatbots and image generators, are pretty static. Once trained, their internal connections are fixed, making it hard to fundamentally change them with experience.

Liquid neural networks are different. “Liquid for flexibility,” Hasani said. “Liquid neural networks are systems that can stay adaptable when they do computation.” 

To explain, he used the example of a self-driving car. If the car is driving and it starts raining, it needs to keep driving despite the view (or input data) becoming noisy. So the system needs to adapt and be flexible enough to handle that change. 

Traditional neural networks process information in a strictly one-way, deterministic fashion: the same input will always produce the same output, and data flow moves in a single direction through the layers. This is an oversimplified picture, but you get the point. 

Liquid neural networks work differently. Their neurons can influence one another both forwards and backwards through the network, creating a more dynamic system. The result is a model whose response depends on its own evolving internal state: give it the same input twice, and it might respond slightly differently each time, just as biological systems do.
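To make the contrast concrete, here is a toy sketch in Python (using NumPy) of the general idea – not Liquid AI’s actual architecture, and with invented weights and sizes. A plain feedforward layer computes a fixed function of its input, while this ‘liquid-style’ cell carries a hidden state governed by a small differential equation whose decay rate depends on the current input, so the same input can land on a different answer depending on what came before.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyLiquidCell:
    """Toy 'liquid' cell (illustrative only, not Liquid AI's real model).

    The hidden state x follows the differential equation
        dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A
    where f depends on both the input u and the state x, so the decay rate
    and the target of the dynamics shift with every new input.
    """

    def __init__(self, n_in, n_hidden):
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))       # input weights
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)                              # bias
        self.A = np.ones(n_hidden)                               # target state
        self.tau = 1.0                                           # base time constant
        self.x = np.zeros(n_hidden)                              # hidden state, kept between inputs

    def step(self, u, dt=0.1):
        # Input- and state-dependent rate: this is what makes the dynamics 'liquid'.
        f = 1.0 / (1.0 + np.exp(-(self.W_rec @ self.x + self.W_in @ u + self.b)))
        # One semi-implicit Euler step of the equation above.
        self.x = (self.x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))
        return self.x


# A static feedforward layer, for contrast: no state, so same input -> same output.
W, b = rng.normal(0.0, 0.5, (8, 3)), np.zeros(8)
feedforward = lambda u: np.maximum(0.0, W @ u + b)

u = np.array([1.0, -0.5, 0.2])
cell = ToyLiquidCell(n_in=3, n_hidden=8)
print(feedforward(u)[:3], feedforward(u)[:3])  # identical both times
print(cell.step(u)[:3])                        # first response to u
print(cell.step(u)[:3])                        # same input, different output: the state has moved
```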

C. elegans, a free-living transparent nematode (roundworm) about 1mm in length, lives in damp, rich environments like soil, compost heaps and rotting vegetation - Credit: iStock / Getty Images Plus

“Classic networks receive an input, compute, and output a result,” Hasani said. “Liquid neural networks, they do the computation, but they also change the way they do the computation every time they receive a new input.”

The maths behind these networks was anything but simple. Early versions were slow, as they had to solve a series of complex equations step by step before producing an output.

That changed in 2022, when Hasani and his colleagues published a paper in Nature Machine Intelligence describing a shortcut: an approximate way to handle those equations without all the heavy computation. 

Overnight, liquid models were supercharged, running orders of magnitude faster while preserving the biological flexibility that traditional AI systems lack.
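For intuition, here is a rough caricature in Python (invented names and sizes, same toy equation as in the sketch above) of the difference between the two approaches. The ‘solver’ path crawls towards the answer in many small steps; the ‘closed-form’ path freezes the nonlinearity at its current value and jumps straight to where that frozen equation would settle after time t. The real 2022 shortcut is considerably more sophisticated – this only illustrates why skipping the inner solver loop is so much faster.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h = 3, 8
# Invented toy parameters: recurrent weights, input weights, bias, target state, base time constant.
W_rec = rng.normal(0.0, 0.5, (n_h, n_h))
W_in = rng.normal(0.0, 0.5, (n_h, n_in))
b, A, tau = np.zeros(n_h), np.ones(n_h), 1.0

def rate(x, u):
    # Input- and state-dependent rate f(x, u).
    return 1.0 / (1.0 + np.exp(-(W_rec @ x + W_in @ u + b)))

def solver_update(x, u, t=1.0, n_steps=20):
    """Slow path: integrate dx/dt = -(1/tau + f)*x + f*A with many small steps."""
    dt = t / n_steps
    for _ in range(n_steps):
        f = rate(x, u)
        x = (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))
    return x

def closed_form_update(x, u, t=1.0):
    """Fast path: freeze f at its current value and solve the resulting
    linear equation exactly - one gated blend, no inner loop."""
    f = rate(x, u)
    x_target = (f * A) / (1.0 / tau + f)   # where the frozen equation settles
    gate = np.exp(-t * (1.0 / tau + f))    # how much of the old state survives
    return gate * x + (1.0 - gate) * x_target

x0, u = np.zeros(n_h), np.array([1.0, -0.5, 0.2])
print(solver_update(x0, u)[:3])       # 20 solver steps
print(closed_form_update(x0, u)[:3])  # a similar destination in a single step
```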

Smaller, greener, smarter

All that flexibility means liquid models can compress far more information into smaller systems.

“At the end of the day, what are AI systems? They basically take a big bulk of data and compress it into this algorithmic space,” Hasani said.

“If you only have static numbers to compress systems into, you can only do so much. But if you have flexible dynamics that can capture the essence of your data, you might actually be able to pack more intelligence inside the system.”

He called it a “liquid way of computation”. The result is models that are thousands of times smaller than today’s large language models, yet capable of matching – or outperforming – them on specific tasks.

Prof Peter Bentley, a computer scientist at University College London who studies biologically inspired computing, said this shift is crucial: “Right now, AI is dominated by massive power-hungry models that all follow a quite old idea of how networks of neurons can be simulated in a computer.

“Fewer neurons means smaller models, which means less compute. Which means less energy. And the ability to keep learning is important – something current large models really struggle with.”

In Hasani’s words: “You could literally mount one of our systems on a coffee machine.”

He added: “If you can host it on the smallest units of computation, you can host it anywhere. And that just creates an enormous space of possibilities.”

Liquid models are compact enough to run directly on devices such as smart glasses or self-driving cars – no cloud connection required - Credit: iStock / Getty Images Plus

AI that fits in your pocket, or on your face

Liquid AI is already building these systems for the real world. One partner is developing smart glasses that run directly on a user’s device. Others are working on self-driving cars or language translators that run directly on phones. 

Hasani, who wears glasses all the time, pointed out that making glasses smart sounds great, but you wouldn’t want everything you see being fed to a remote server just to deliver those smart features (think about your toilet time).

That’s where liquid networks come in. Because they can run on minimal hardware, they can process data locally, keeping it private and cutting energy use.

It also makes AI more independent. “Humans are completely independent of the human next to them,” Hasani explained. “But they can communicate with each other. I want the devices of the future to have that independence and at the same time have the ability to share information.” 

Hasani called this next step “physical AI” – intelligence that doesn’t just live on the cloud but interacts with the physical world. Harnessing this version of intelligence may finally allow for the robots of sci-fi fantasy to enter the real world without the need for a constant internet connection.

There are some drawbacks. Liquid systems only work on what’s known as ‘time series’ data. That means they can’t process static images (something traditional AI is quite good at), but need sequential data like videos.

For Bentley, this shouldn’t hold them back too much: “While time series data might sound a bit limiting, it’s really not – most real-world data has a time element or changes over time. There’s a lot of it about: video, audio, financial markets, robot sensors.”
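To show what ‘time series’ means in practice, here is a minimal, hypothetical sketch: video, audio or sensor readings all reduce to one feature vector per time step, fed through the network in order while it carries its state along.

```python
import numpy as np

rng = np.random.default_rng(1)
video = rng.normal(size=(30, 16))   # stand-in for 30 frames, 16 features each

def update(state, frame):
    # Stand-in for a liquid cell's state update (see the earlier sketch).
    return 0.9 * state + 0.1 * np.tanh(frame)

state = np.zeros(16)
for frame in video:                 # video, audio, market ticks, robot sensors: all fit this loop
    state = update(state, frame)

print(state[:4])                    # the state now summarises the whole sequence
```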

Hasani also conceded that these systems won’t be the ones pioneering new scientific discoveries like finding fresh energy sources or novel medical treatments. Such work will remain in the domain of giant models.

But that’s not really what they’re designed for. Instead, the technology is promising to make AI more efficient, interpretable and human-like, all while fitting on whatever real-world device you need. And it all began with a tiny worm, quietly wriggling through the soil.
