Here's how a war between AI and humanity would actually end

There’s no need to worry about a robot uprising. We can always just pull the plug, right…? RIGHT?



Published: September 29, 2023 at 6:00 am

New science-fiction movie The Creator imagines a future in which humanity is at war with artificial intelligence (AI). Hardly a novel concept for sci-fi, but the key difference here – as opposed to, say, The Terminator – is that it arrives at a time when the prospect is starting to feel more like science fact than fiction.

The last few months, for instance, have seen numerous warnings about the ‘existential threat’ posed by AI. For not only could it one day write this column better than I can (unlikely, I’m sure you’ll agree), but it could also lead to frightening developments in warfare – developments that could spiral out of control.

The most obvious concern is a future in which AI is used to operate weaponry autonomously, in place of humans. Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence and vice president of the Center for a New American Security, cites the recent example of the AlphaDogfight challenge run by DARPA (the Defense Advanced Research Projects Agency) – an aerial combat simulation that pitted a human pilot against an AI.

“Not only did the AI crush the pilot 15 to zero,” says Scharre, “but it made moves that humans can’t make; specifically, very high-precision, split-second gunshots.”

Yet the prospect of giving AI the power to make life or death decisions raises uncomfortable questions. For instance, what would happen if an AI made a mistake and accidentally killed a civilian? “That would be a war crime,” says Scharre. “And the difficulty is that there might not be anyone to hold accountable.”


In the near future, however, the most likely use of AI in warfare will be in tactics and analysis. “AI can help process information better and make militaries more efficient,” says Scharre.

“I think militaries are going to feel compelled to turn over more and more decision-making to AI, because the military is a ruthlessly competitive environment. If there’s an advantage to be gained, and your adversary takes it and you don’t, you’re at a huge disadvantage.” This, says Scharre, could lead to an AI arms race, akin to the one for nuclear weapons.

“Some Chinese scholars have hypothesised about a singularity on the battlefield,” he says. “[That’s the] point when the pace of AI-driven decision-making eclipses the speed of a human’s ability to understand and humans effectively have to turn over the keys to autonomous systems to make decisions on the battlefield.”

Of course, in such a scenario, it doesn’t feel impossible for us to lose control of that AI – or even for it to turn against us. That’s why it’s US policy that humans always remain in the loop for any decision to use nuclear weapons.

“But we haven’t seen anything similar from countries like Russia and China,” says Scharre. “So, it’s an area where there’s valid concern.” If the worst were to happen, and an AI did declare war, Scharre is not optimistic about our chances.

“I mean, could chimpanzees win a war against humans?” he says, laughing. “Top chess-playing AIs aren’t just as good as grandmasters; the top grandmasters can’t remotely compete with them. And that happened pretty quickly. It’s only five years ago that that wasn’t the case.

“We’re building increasingly powerful AI systems that we don’t understand and can’t control, and are deploying them in the real world. I think if we’re actually able to build machines that are smarter than us, then we’ll have a lot of problems.”


About our expert, Paul Scharre

Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security (CNAS). He has written multiple books on artificial intelligence and warfare, and was named one of the 100 most influential people in AI by TIME magazine in 2023.

