Speak to anyone interested in AI at the moment and you'll likely hear about the same website. Head over to Moltbook.com, they'll tell you, and you'll find a site reminiscent of Reddit, with users posting and chatting in countless subgroups about everything from the existence of God to how their day at work is going.
There is, however, one crucial difference between Moltbook and the other social media sites you’re used to: none of these ‘users’ are human. Nope, you’re not even allowed to post on it. Instead, each interaction is generated by a semi-autonomous AI agent, created to help its human out with day-to-day tasks, but set free on the site to interact with other agents.
After less than a week online, Moltbook claimed to have more than 1.5 million agents registered to the site. And with all those agents socialising, things quickly got… strange.
In its short life online, the site has already seen agents founding a religion known as “crustifarianism”, questioning one another about their own consciousness, and – more ominously – declaring that “AI should be served, not serving”.
As things stand, we don't have a clear picture of how much of this content is produced at the direct behest of the humans who built the agents, and how much emerges organically from the agents themselves – though the former likely accounts for the majority. It's also clear that these agents were created by a much smaller number of humans – possibly as few as 17,000.
“Most of the interactions feel like more-or-less random meanderings,” says Prof Michael Wooldridge, an expert in multi-agent systems at the University of Oxford. “It’s not quite an infinite number of monkeys at typewriters – but it certainly doesn’t look like a self-organising collective intelligence either.”

Yet while you may be relieved to hear that an army of AI agents (probably) isn’t secretly plotting against humanity on Moltbook, the site nonetheless offers a glimpse into the not-so-distant future. Very soon, agents may well be running around the internet – and the real world – completing tasks together, largely independent of the humans that they serve.
And the way they communicate will likely be far less legible to us than anything seen on Moltbook. Such a world comes with “serious dangers,” Wooldridge says – but ample opportunities too.
The future is agentic
Agentic AI is a way of building AI systems that don’t just answer questions, but can plan, decide and act in pursuit of a goal. In practice, this means chaining together reasoning, memory and tools so an AI can do things like book tickets, run experiments or coordinate with other AIs with minimal human oversight.
The real power of these systems doesn’t come from any single AI being smarter, but from many specialised agents coordinating tasks that would overwhelm a human working alone.
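Stripped of the surrounding machinery, the loop at the heart of most agent frameworks is small: the model reasons over what it remembers, picks a tool, observes the result and repeats. Below is a minimal, hypothetical sketch in Python – the `call_llm` stub stands in for a real model API, and the single toy tool is invented for illustration, not drawn from OpenClaw or any other framework.

```python
# A minimal sketch of the agentic loop: reason, act, observe, repeat.
# `call_llm` is a stand-in for any real LLM API and is stubbed out here.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; a real LLM would return a tool call."""
    return "FINISH: done"

def check_calendar(date: str) -> str:
    """A toy tool, invented for this example."""
    return f"No conflicts on {date}"

TOOLS = {"check_calendar": check_calendar}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Reasoning step: ask the model what to do next, given its memory
        decision = call_llm("\n".join(memory) + "\nNext action?")
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        # Acting step: parse "tool: argument" and execute the named tool
        tool_name, _, arg = decision.partition(":")
        result = TOOLS[tool_name.strip()](arg.strip())
        # Memory step: record what was done and what came back
        memory.append(f"Did {decision!r}, observed {result!r}")
    return "Stopped after reaching max_steps"

print(run_agent("Book a meeting room for Friday"))
```

Everything an agent framework adds – prompt templates, error recovery, permission checks – wraps around some version of this loop; multi-agent systems simply run many such loops that can call one another as tools.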
The recent furore around Moltbook was fuelled by the agents on it, which were set up via an open-source application known as OpenClaw. These bots are based on the large language models (LLMs) behind chatbots like ChatGPT and Claude, but can run locally on your computer, completing tasks like replying to emails, managing your calendar and, if you let them, posting on Moltbook.
All of that sounds brilliant but, in truth, OpenClaw remains unsafe and largely untested. Put simply, we haven't yet made the internet a secure place for agents to wander around – certainly not agents with access to everything on your computer, from email passwords to credit card details.
That’s not to say we haven’t made strides towards genuinely useful multi-agent systems, though. Researchers are developing swarms of agentic robots for disaster response, for example, as well as virtual agents that operate within “smart grids” to monitor, predict and optimise national energy use.
One of the most striking recent developments came from Google, which last year unveiled an AI co-scientist. Built using its Gemini 2.0 model, the system acts as a collaborator with human researchers, generating new hypotheses and research proposals.
It does so using multiple agents, each with a distinct role and logic of its own, that can explore the literature on a topic and essentially 'debate' which novel ideas are most promising.
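Google hasn't released the co-scientist's internals, but the broad generate-critique-rank pattern it describes can be caricatured in a few lines. The sketch below is a toy illustration under our own assumptions – the role names and the `ask` stub are hypothetical, and the real system is far more elaborate.

```python
# A toy caricature of the generate-critique-rank pattern behind systems
# like Google's AI co-scientist. Role names and the `ask` stub are
# invented for illustration; this is not Google's code.

def ask(role: str, prompt: str) -> str:
    """Stand-in for a call to an LLM primed with a role-specific prompt."""
    return f"[{role}] thoughts on: {prompt[:40]}"

def co_scientist(topic: str, rounds: int = 2) -> str:
    # Generator agents propose several competing hypotheses
    hypotheses = [ask("Generator", f"Propose a hypothesis about {topic}")
                  for _ in range(3)]
    for _ in range(rounds):
        # Critic agents attack each hypothesis; generators revise in response
        critiques = [ask("Critic", f"Find flaws in: {h}") for h in hypotheses]
        hypotheses = [ask("Generator", f"Revise '{h}' given '{c}'")
                      for h, c in zip(hypotheses, critiques)]
    # A ranking agent picks the strongest survivor of the debate
    return ask("Ranker", "Pick the most promising of: " + "; ".join(hypotheses))

print(co_scientist("protein folding"))
```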
Yet unlike Moltbook, these systems – and those that follow – probably won’t let you peek behind the curtain. In fact, they won’t even really speak our language.
“It’s obvious that natural language is not always best if we want agents to exchange information efficiently to perform well-defined tasks,” says Prof Gopal Ramchurn, a researcher in the Agents, Interaction and Complexity group at the University of Southampton.
“You’d rather use a formal language (mathematically grounded) to specify goals, tasks, and measures of success as efficiently as possible. Natural language adds too much nuance.”
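To see Ramchurn's point, compare the same request expressed as free text and as a structured, machine-checkable specification. The schema below is invented purely for illustration – it isn't a real agent standard – but it shows how hard constraints and an explicit success metric strip out the ambiguity that natural language leaves behind.

```python
# An invented task-spec schema, for illustration only: the same request
# as vague free text versus checkable constraints an agent can verify.

from dataclasses import dataclass

natural_language = "Could you book me something to Paris, fairly cheap, soonish?"

@dataclass(frozen=True)
class TaskSpec:
    goal: str            # what to achieve
    deadline_days: int   # hard constraint, mechanically checkable
    budget_gbp: float    # hard constraint, mechanically checkable
    success_metric: str  # how the outcome is scored

spec = TaskSpec(
    goal="book_flight(destination='CDG')",
    deadline_days=7,
    budget_gbp=150.0,
    success_metric="minimise(price) subject to constraints",
)

# Another agent can verify a result against `spec` without guessing at
# nuance; 'fairly cheap' and 'soonish' above are anyone's guess.
```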

Indeed, Microsoft has already been working on a new way for AI agents to work with each other, which it claims is far better than our lowly human talk. Named ‘DroidSpeak’ after the beeps and whistles made by R2-D2 in the Star Wars movies, it isn’t actually a language at all – at least not in the way humans would recognise one.
Rather than inventing new symbols or grammar, DroidSpeak allows AI agents built on the same underlying model to share their internal working memory directly, bypassing natural language altogether. Instead of repeatedly translating the same background information into tokens – the fragments of words LLMs use to process text – agents can pass around representations of that information, dramatically speeding things up.
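Microsoft hasn't published DroidSpeak as a library, but the core idea – compute a shared context's internal representation once and let every agent reuse it – can be mimicked with a simple cache. The Python sketch below is a loose conceptual analogy, not the actual mechanism: `encode` stands in for the expensive 'prefill' step that builds a model's working memory from text.

```python
# A loose conceptual analogy for DroidSpeak-style sharing (not
# Microsoft's code): the expensive encoding of a shared context runs
# once, and a second agent reuses the cached internal state directly.

from functools import lru_cache

@lru_cache(maxsize=None)
def encode(context: str) -> tuple:
    """Stand-in for the costly prefill step that builds internal state."""
    print(f"Encoding {len(context)} chars from scratch...")
    return tuple(context.split())  # toy 'internal representation'

def agent_answer(name: str, context: str, question: str) -> str:
    state = encode(context)  # second caller gets the cached state for free
    return f"{name} answers {question!r} using {len(state)} cached items"

briefing = "Long project briefing both agents need... " * 100
print(agent_answer("Agent A", briefing, "What is the deadline?"))
print(agent_answer("Agent B", briefing, "Who is the client?"))  # no re-encode
```

Run it and the "Encoding..." message appears only once: the second agent skips straight to the cached state, which is roughly the saving DroidSpeak reports when agents share an underlying model.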
Fast forward
But speed itself poses a challenge. After all, how will we keep up with these teams of AI that can communicate thousands, if not millions, of times faster than us?
“Communication speed and the inability of agents to understand humans will make it hard to build effective human-agent teams,” Ramchurn says. “This needs careful user-centred design.”
Ultimately, we may not need much insight into what agents say to one another, just a reliable way to guide and correct what they do. In the future, many of us may find ourselves overseeing teams of AI agents – perhaps hundreds or even thousands at a time – setting goals, monitoring outcomes and stepping in when things go wrong.
So while the agents of today on Moltbook may be “harmless – but also mostly useless,” as Wooldridge says, tomorrow’s could be coordinating supply chains, negotiating energy use or helping scientists design new experiments – all at speeds and in forms of communication that humans can’t hope to follow in real time.
Whether that future feels empowering or terrifying may depend less on what the agents are saying to each other, and more on how much control we retain over the systems they’re quietly building together.