It's time to accept AI will never think like a human – and that's okay

Published: 06th May, 2022 at 18:00

There’s no point in getting frustrated with AI when it doesn’t do what we expect it to. Instead, we should focus on the ways it can help and support people.

Since the start of the pandemic, artificial intelligence (AI) developers have deployed hundreds of machine learning tools to help diagnose COVID-19. The promise: to find patterns in the medical data like an algorithmic version of the television character Dr House.


Recently, researchers have discovered that these AI tools were overhyped. Instead of discovering relevant connections between cases, the algorithms were making a litany of false assumptions, including predicting COVID cases based on the text font that hospitals happened to use in their documents.

This does not mean that machine learning is useless. It means that we need to better understand the strengths and limitations of AI and, as we’ve done with animals, embrace the fact that machines think differently from us.

To a human, it’s obvious that a text font is not a good predictor of infectious diseases. But to a machine, that’s not obvious at all. AI may be able to use informational input to make predictions, but it’s not aware of what it’s doing. It doesn’t understand concepts or context and is easily thrown off by biased or mislabelled data that wouldn’t fool a four-year-old.

As machine learning expert Janelle Shane explains in her book on AI weirdness, You Look Like A Thing And I Love You, the mistakes machines make seem absurd to us because machines don’t perceive the world the way we do.

Unlike AI, human intelligence is extremely generalisable and adaptive. We’re flexible thinkers, understand broad concepts, and we can contextualise unexpected results or situations. And yet, a Google image search for ‘artificial intelligence’ in 2022 returns mostly pictures of human brains.

It’s not just our stock photo images: we use our own intelligence as a model when talking about AI, whether in casual conversation, science fiction thrillers, or in our news headlines. In part, this is because the AI pioneers originally did set out to understand and recreate human intelligence. So far, they haven’t succeeded.

It’s not that technology isn’t smart or getting smarter. Given the right data, training and circumstances, machines are great at computation, predictions and recognising patterns. My phone can do calculus and parse voice commands (at least most of the time).

Newer deep-learning methods can leave human ability in the dust. In 2016, when an AI system named AlphaGo beat the best Go player in the world, it made a move that astonished the experts: a move no human player would ever have thought to try. So rather than viewing AI as a less-developed version of ourselves, maybe it’s time to embrace our differences.

Roboticist Rodney Brooks once wrote, “It is unfair to claim that an elephant has no intelligence worth studying just because it does not play chess.” Animals are a more useful comparison to AI, because they, too, perceive and engage with the world differently from humans. They sense things we can’t, and are totally oblivious to things that are obvious to us.

That’s why, throughout history, we’ve relied on animals to help us do things we couldn’t do alone. We domesticated beasts of burden to help plough our fields, and carry people and economic goods to new places. We’ve used canaries in coal mines, created pigeon postal services, trained ferrets to run electrical wire through pipes, and taught dolphins to recover lost underwater equipment.

You wouldn’t trust a dog to give you a medical diagnosis or relationship advice, but you might trust it to sniff out explosives, assist the blind, or provide therapeutic comfort. Similarly, AI may be lousy at appreciating your jokes or responding in an unexpected situation, but it can navigate traffic, detect safety hazards in nuclear plants, and collect data on Mars.

Robots like PARO, a snuggly medical device that looks and moves like a baby harp seal, are even surprisingly effective in therapy when using real dogs isn’t feasible. The point isn’t that AI should replace dogs. The point is that the animal thought exercise lets us set aside the human comparison and imagine what AI can help us with that we can’t do alone.

Understanding the strengths and limitations of AI is key to avoiding the types of harmful mistakes we’re seeing today. The idea that we’re dealing with a different kind of intelligence inspires us to leverage this technology to support people – rather than replacing them. It encourages us to invent new practices and find new solutions – rather than recreating what we already have. And it prompts us to think more creatively and inclusively about how to situate AI in our infrastructure, workplaces and personal lives.

The best possible future isn’t one in which our technology thinks or acts like a human. It’s one in which we’ve envisioned a better world, and partnered with technology to create it.


Authors

Dr Kate Darling is a Research Specialist at the MIT Media Lab and author of The New Breed. Her interest is in how technology intersects with society.
