There are few events in technology that can be classed as true 'Big Bang' moments: times when our understanding of the world, and tech's place in it, shifts.
The advent of the World Wide Web was one such 'before and after' moment. The release of the iPhone in 2007 was another, bringing about the smartphone revolution.
The November 2022 release of ChatGPT was a similarly seismic shift. Before that, artificial intelligence (AI) was something few people outside the tech world really knew or cared about.
But the large language model (LLM) quickly became the fastest-growing app in history and kickstarted what we now call the ‘generative AI revolution’.
Revolutions can’t always sustain the same momentum, however.
Three years on from the release of ChatGPT and despite the harrowing headlines about mass job displacement at the hands of AI, many of us still remain employed, and, reportedly, more than half of Brits have still never used an AI chatbot.
Whether the revolution has stalled is debatable, but even AI’s keenest disciples suggest things aren’t moving as quickly as expected. So, is AI as intelligent as it’s ever going to get?
What is intelligence anyway?
The question of whether AI’s intelligence has plateaued depends on your definition of the word ‘intelligent’, reckons Prof Catherine Flick, professor of AI ethics at Staffordshire University.
“In my opinion, AI is not actually intelligent at all, but a programmatic ability to respond to human questions with intelligent-seeming responses,” she says.
For her, the answer to whether AI has become as intelligent as it ever will is yes – because it never was and never can be.
“All that can happen is that we could get better at programming these tools to return an ever-more-deceptive simulacrum of intelligence. But the underlying ability to think, experience and reflect will forever be off-limits to artificial agents,” she says.
In part, some of the disappointment around AI comes from a group of AI advocates who suggested AI could do anything a human could do – and do it better – from the moment it was unleashed on the world.
That group included the AI companies themselves and their leaders. Dario Amodei, CEO of Anthropic, the makers of the Claude chatbot, is one of the loudest backers.

He recently suggested AI could develop beyond the limits of human intelligence within three years – though he’s previously made similarly bullish predictions that ended up being wrong.
Flick recognises that ‘intelligence’ when it comes to AI means different things to different people. And if the question is really ‘Will AI models like ChatGPT and Claude get even better?’, her answer changes.
“[They’ll probably] get better as other methods are found that can more accurately simulate [a human-style interaction], but they’re never going to make that magical step from being a fancy statistical weighting processor of data to actual experiential, thoughtful, reflective intelligence.”
Nevertheless, the debate about AI models starting to return less and less powerful improvements is a lively one in the AI industry.
OpenAI’s highly anticipated GPT-5 model turned out to be a damp squib – largely because the company tried to present it as something superhuman in its pre-release marketing
So, when the slightly more capable model was released, people saw it as underwhelming. For AI naysayers, that’s a signal we’ve already reached a ceiling. But are they correct?
Read more:
- How AI could soon be used by Wikipedia, according to its founder
- These are the traits most likely to make you a killer, according to police AI
- Could scientists upload an animal brain to a computer?
Two-track system
“The perception that AI’s progress is plateauing is actually an illusion, shaped by the fact that most people encounter it only through consumer-facing applications like chatbots,” says Eleanor Watson, an AI ethics engineer and faculty member at Singularity University – an education company and research facility.
Even then, those chatbots are improving, but often in ways that feel incremental, says Watson. “Like a car getting a sleeker paint job or a better GPS each year,” she explains.
“What this view misses are the revolutionary changes happening under the hood. In reality, the engine has been fundamentally redesigned and is accelerating at an exponential pace.”
While AI chatbots might operate largely the same as they did three years ago for the average user who doesn’t dig into the details, AI is being used in a range of applications that it wasn’t before, and successfully so. Medicine, for example.
That pace is likely to continue, she reckons, for a number of reasons. One is the sheer might of energy being put behind the generative AI revolution.
According to the International Energy Agency, by 2030 the electricity demand to power AI systems will be greater than what’s used for manufacturing steel, cement, chemicals and every other energy-intensive good combined.

Tech companies are spending vast sums on data centres to process our AI queries.
In 2021, the year before ChatGPT’s release, four of the biggest tech companies – Alphabet (Google’s parent company), Amazon, Microsoft and Meta (owners of Facebook) – spent just over $100bn (£73bn) on everything needed to house and operate these data centres.
In 2025, it’s closer to $350bn (£256bn), and expected to exceed $500bn (£366bn) by 2029.
Alongside building bigger data centres with stronger, more reliable sources of electricity to power the models, AI companies are getting smarter with how those models operate.
“The brute-force method of adding more data and computing power still yields surprising gains, but the bigger story is efficiency,” says Watson.
“Models are becoming vastly more capable. A task that once demanded a sprawling behemoth can now be achieved by a system a fraction of the size, that’s cheaper and faster, with capability density rising at an astonishing rate.”
Techniques such as rounding numbers, or quantising inputs to LLMs – meaning you reduce how precise the information is in areas that are less important – can all make models more efficient.
Get me an agent
One area of ‘intelligence’ – if we’re defining it as ‘efficiency’ – that AI still has room to grow into is in ‘agentic’ AI use.
This involves shifting what an AI does and how we interact with it, and is still in its early stages. “An agentic AI can manage your finances, anticipate needs and devise sub-goals toward a larger objective,” explains Watson.
All the major AI companies, including OpenAI, are integrating agentic AI tools into their systems, which would turn the use of the technology from simple chats into AI-powered co-workers who can independently get on with tasks while you focus on something else.
Increasingly, these AI agents are able to labour away independently for hours at a time – something that most people would argue signals the advancement of AI intelligence.
Yet AI agents come with their own challenges.
Researchers have already identified issues with agentic AI, which can be hoodwinked into carrying out nefarious instructions through so-called ‘prompt injection’ attacks, where they’re told to carry out commands that could be harmful when they encounter instructions on a website the AI agent believes to be innocuous.
For that reason, many companies are keeping tight leashes on these AI agents.
But just the notion that AI can be sent away to do tasks on autopilot suggests that there’s room to grow. That, alongside the investments in computing power and the continual turnover of AI products, suggests that AI isn’t stalling. Far from it.
“The smart bet is on continued, exponential growth,” says Watson. “The [tech] moguls are right about the trajectory, but they tend to underplay the governance and safety challenge that must scale alongside it.”
Read more:
