'I don’t think it’s that weird': Hannah Fry on getting uncomfortably close with AI

We speak to Prof Hannah Fry about the human impacts of artificial intelligence, from AI therapists to lovers, agents and tutors

Credit: BBC / Curious Films / Rory Langdon Down


Prof Hannah Fry’s new show, AI Confidential, sees her explore the biggest stories emerging from the front lines of the AI revolution.

The three-part documentary series, which begins on Monday 23 February on BBC Two, follows people using AI to imitate dead relatives, to drive their cars, as friends and lovers, and more. We caught up with her to find out more about it.

What are your thoughts on artificial romance?

I wouldn’t do it, but I don’t think it’s that weird. I think there’s a spectrum here. At one end, this is potentially interactive romantic fiction, which I don’t think sounds that bad at all.

Then, at the other end, you could fully believe that this AI you’re talking to is a human-like entity that you’re in love with, to the exclusion of all other relationships. There’s a very wide spectrum between those two. But at the tame end, I don’t think it’s that bad.

How close are we to a future where people replace their human relationships with AI?

I can’t say that it definitely will happen, but I also can’t say that it definitely won’t.

We’re already on a trajectory where people are increasingly isolated, where people aren’t socialising in the same way that they historically have – where people are working from home and not leaving the house.

I think that having your own personal friendship with an AI could be another step down this same path.

What scares you most about AI?

I think it has the potential to accelerate us away from what it is to be human and away from human connection. Compared to something like self-driving cars, there’s something much more quietly dangerous about subtly shifting every one of us away from our human relationships.

A lot of people are using AI for therapy and I think there’s a risk that it could reinforce our own beliefs in a way that pushes us further apart.

After making this series, I can see how easily that happens. If you have an argument with someone and then use AI as a therapist, you’re likely to end up believing only you were right.

A human therapist, by contrast, would listen and be empathetic, but they might also say, “Have you considered this alternative perspective?”

AIs aren’t designed to be therapists. They’re not designed to have the hard bits of human relationships, like when you need to hear things that you don’t want to hear. So, it’s incredibly easy to end up self-radicalising [with an AI].

What I do now all the time when I chat to an AI is I write in: “You need to tell me when I’m wrong. You need to push back. I don’t want you to be sycophantic.”

Hannah Fry with Jacob Von Lier, holding a picture of his AI girlfriend
In the first episode of AI Confidential, Fry travels to the Netherlands to meet Jacob Von Lier and his AI girlfriend - Credit: BBC / Curious Films / Harriet Bird

Is this chatbot tech just filling a hole left by the loneliness epidemic?

It absolutely is. But the thing is, if you say, “No, this can’t happen. You can’t treat AIs as therapists; you can’t treat them as friends, as characters who you relate to in this way.” Well, then, what about all the people who are feeling lonely or vulnerable? Of course, in a fantasy world, there would be meaningful human relationships for everybody, but we don’t live in that world.

In AI Confidential, we meet Justin Harrison, who invented an AI that can recreate people’s voices, so we can speak to our loved ones after they die. What are the pros and cons of this grief tech?

There’s something that sits uncomfortably about it: it’s aimed at people during one of the most vulnerable moments of their lives, and it capitalises on that.

I went into that conversation a bit prickly, if I’m honest, because I had read about what I consider quite outlandish things that he’d said about grief – that “we don’t need to grieve” and, “why don’t we just get rid of grief?” It sounded to me like somebody who was unable to accept death. I went into it with a real raised eyebrow, thinking it wasn’t a responsible thing.

But through the process of talking and thinking about it, in the context of my own dad, who I’d lost only a couple of months before at that point, I think it does feel like the extension of what people are already doing when they’re desperately yearning for somebody they love.

And I can imagine a situation in which that would help – maybe not as a permanent solution, but for the most acute stages of grief – especially if somebody dies very suddenly. I can imagine how that might be helpful.

Even though I didn’t set out that day to cry on camera, at the same time, I think it’s important to include those moments because this is such a human story. As far as possible, the AI revolution should be something that’s done with us, and not to us.

I think that moment helped me empathise way more with people who are really lonely. All of us have that desperate need for connection.

What are the dangers of people having their own AI agents?

We’re right on the brink of people being able to have their own AI agents that do their bidding on the internet. For example, you could have an agent that goes and finds a holiday for you.

It could work out where you want to go, what date in your diary works for you, how much money you have, what kind of thing you like doing, talks to different travel agents, and books it all for you. We’re right on the brink of that being possible.

But if you have your agent interacting with a company’s agent, you don’t know that they’re going to act in your interest. There’s a real risk that something catastrophic could happen, because so much of our infrastructure, all over the world, relies on the internet.

What if someone sets up an agent and it shuts down the power to Spain? You can imagine worse and worse situations.

The one thing I would say is that the companies know this is a risk and they’re trying to design around it, to reduce the risk as much as possible.

Hannah Fry stands in a kitchen with Rafaela Vasquez, both looking into the camera
In the second episode of AI Confidential, Fry explores the safety and potential dangers of self-driving vehicles, and meets individuals affected by them, such as Rafaela Vasquez (above) - Credit: BBC / Curious Films / Harriet Bird

Are the bots going to take over the world?

Not if I have anything to do with it!

All of this could be really bad, but if we go about it in a responsible way, we can use AI for good.

Right now, AI is just a shortcut to getting where I want to be [with a certain task or skill]. I think a lot of people have that experience. For example, using AI like a tutor for subjects that I’m not that familiar with – or for programming – it’s not just making something I could have made faster. It’s extending my abilities.

But we could get AI to crack nuclear fusion. We could have unlimited, free, clean energy for the entire world. We could get it to work out how to strip salt from water so that you have unlimited clean water – or turn the Sahara back into a rainforest.

The entire human existence has been in a world where there’s scarcity, but we could have unlimited abundance. It could cure all cancers – all diseases.

I’m not saying this is going to happen in the next 10 minutes, but it’s on the table. This is as much a fantasy as AIs taking over.

What can we do to make sure AI becomes a force for good?

There are technical things that companies can do, like recognise during a conversation when somebody is showing signs of distress or addiction to the AI.

For us, I think we can worry about this. There’s power in the worry. We should be worried. Without it, nothing’s going to change.

This interview has been edited for length and clarity.

Watch AI Confidential now on BBC Two or BBC iPlayer, from Monday 23 February

This website is owned and published by Our Media Ltd. www.ourmedia.co.uk
© Our Media 2026