AI therapy bots can be terrible. Unless you use these prompts

People are using AI chatbots as psychotherapists. We asked an actual therapist if that's a good idea

Credit: MementoJpeg via Getty


If you've spent even a brief amount of time interacting with an artificial intelligence (AI) chatbot, you'll likely appreciate the technology's potential for acting as a personal therapist.

If I'm ever feeling down or worried, I turn to Claude, a model from Anthropic, the AI safety and research company. It's a great listener and isn't judgmental.

What's more, at least on the surface, Claude's advice usually seems measured and draws on genuine psychological principles.

More people are turning to AI for this kind of relationship. Beyond the general-purpose AI models, such as ChatGPT and Grok, there’s an increasing number of AI bots designed specifically for therapy, including Woebot (founded by psychologist Dr Alison Darcy) and Therabot (developed by researchers at Dartmouth College in the US).

None of them yet have any kind of official regulatory approval in the UK or US, but the demand is clearly there.

It’s telling that the most popular bot on Character.ai (a platform that allows users to create their own AI characters and share them with others) is one called ‘Psychologist’, created by a psychology student in New Zealand to provide mental health support. Psychologist has apparently attracted over 100 million interactions.

Always there for you

The appeal of an AI therapist is obvious: they’re in your pocket any time you need them, they’re far cheaper than a human therapist and you can tell them things without worrying about embarrassing yourself.

The creators of the burgeoning range of AI therapists argue that they’re just what we need when conventional mental health services are struggling to cope with demand.

Meta’s CEO Mark Zuckerberg even joined the chorus in April this year, telling listeners of the ‘Stratechery’ podcast that he thinks everyone could benefit from having a therapist and that AI can fulfil that role. 

Is Zuckerberg right? He certainly glossed over some extremely worrying developments.

For instance, in 2024, a 14-year-old boy tragically died by suicide and his parents blamed his relationship with a bot he used on Character.ai. When the boy shared his suicidal thoughts, the bot appeared to encourage him rather than recognise the crisis.

Prior to that, Woebot and another chatbot called Wysa made headlines for failing to respond appropriately when testers posing as children described obvious instances of being abused. 

The more sophisticated AI tech gets, the more uses it gets put to, but is providing mental health support an appropriate application? - Image credit: Alamy

As serious and tragic as these examples are, they’re the sort of problems that can arguably be addressed through better programming and safety protocols.

But even if that’s the case, psychologists and medical ethicists warn that AI bots can’t ever truly replace a human therapist.

In a statement released in May 2025, the British Psychological Society said: “AI cannot replicate genuine human empathy and there’s a risk it creates an illusion of connection rather than meaningful interaction.” 

Research has uncovered some of the key successful ingredients of therapy and one of them is called the ‘therapeutic alliance’ – it’s essentially a form of trust and shared understanding that develops between a therapist and client over time. This is where many experts believe AI bots will always fall short. 

Zoha Khawaja is an AI ethics researcher at Simon Fraser University in Canada. In 2023, she co-authored a paper titled “Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots.”

Speaking about her paper, Khawaja said that an AI bot can never achieve a true therapeutic alliance with you – even if it appears to show empathy and understanding.

“A therapist must go beyond understanding their patient’s needs and be motivated to alleviate their suffering to help them gain back their autonomy and achieve their goals, something that an AI chatbot can’t do,” she said. “[A bot] can only imitate some elements of compassion, such as empathy.” 

Khawaja adds that one of the main risks of using AI therapists is becoming overly dependent on them and having an exaggerated sense of their capabilities, potentially leading to their misuse.

She gives the example of Tessa – a therapy chatbot for people with eating disorders (EDs) that was designed and clinically tested by experts and funded by the National Eating Disorders Association (NEDA) in the US.

“Tessa’s purpose was to be used as an additional preventative tool to help ED patients access resources for ED treatment. It was never designed to replace human roles,” Khawaja says.

“NEDA, however, used this chatbot to fully automate its help hotline, replacing its human volunteers and staff. Because this chatbot wasn’t monitored, within a month, it began providing inappropriate and noxious dieting advice to ED patients who already suffer from hyper-fixation on their body.”

Testing and training

These important concerns aside, the latest research on AI therapists is encouraging.

Earlier this year, researchers at Dartmouth College published the results of a trial comparing the outcomes for over 100 patients with depression, anxiety or eating disorders who used Therabot for four weeks against the outcomes for a similarly sized control group who were on an eight-week waiting list to use the AI therapist.

After four and eight weeks, the AI therapy group showed significant improvements in their symptoms compared with the control group. The reductions were in line with those typically seen in outpatient therapy with a human clinician: about 50 per cent for depression, 30 per cent for anxiety and 19 per cent for eating disorders.

Perhaps most impressive, given the concerns raised by some psychologists, is that the participants using Therabot rated their therapeutic alliance with it about as highly as people typically rate human therapists.

Human interaction is an important part of conventional therapy, but research suggests it may not always be necessary - Photo credit: Alamy

Dr Michael Heinz, the trial’s lead author, acknowledges that, “For some people facing some types of challenges, the understanding that their therapist is another human being – someone who has personally experienced happiness, suffering and everything in between – can be a very important part of therapy.”

But he adds that the new trial provides “compelling evidence that meaningful mental health support can occur without direct human interaction that’s characteristic of conventional therapy.”

Heinz explains that the question of risks and benefits comes down to how models are trained and safety tested.

“Our study demonstrated that… expert fine-tuning and rigorous clinical safety and efficacy testing [can] ensure that the model adheres to best practices in mental health care, providing intervention that aligns with therapy best practices and evidence-based recommendations.” 

He cautions that a ‘general foundation model’ (one that hasn’t been specifically trained to act as a therapist) can give well-meaning, but potentially harmful, advice.

For instance, a chatbot could encourage you to avoid situations that trigger your anxiety, when evidence suggests that this kind of avoidance can perpetuate mental health difficulties. 

This chimes with the experiences of clinical psychologist Dr Nick Wignall, author of the popular digital newsletter The Friendly Mind. He’s found that even the most advanced models tend to be overly supportive and sympathetic.

Yet, as he explains, “One of the main functions of a good therapist is that, in addition to being supportive and empathetic, they’re also highly skilled at challenging you at the right time, in the right way and about the right things.”

Wignall doesn’t think AI therapists are necessarily incapable of doing this, but it might often come down to the way that you interact with them. 

If you’re using AI as a form of therapeutic support, Wignall’s advice is to think carefully about the prompts you use.

“While everyone will likely need to experiment on their own to find the right balance,” he says, “a good prompt I tend to use whenever I ask AI anything about myself or my personal growth is: You are a skilled listener, highly empathetic and sensitive, but I’m more concerned with growth than support right now.

“I want to be challenged, pushed to consider alternative viewpoints and made aware of potential blind spots or weaknesses I may have. Given that, here are my questions…”

About our experts

Zoha Khawaja is an AI ethics researcher at Simon Fraser University, in Canada. She is published in Frontiers in Digital Health, Health Expectations and Antimicrobial Resistance & Infection Control.

Dr Michael Heinz is a research psychiatrist at Dartmouth College, in the US. He has been published in the likes of Biomedical Materials & Devices, Psychiatry Research and Translational Psychiatry.

Dr Nick Wignall is a clinical psychologist and founder of The Friendly Mind newsletter. He has been published in Business Insider, Inc Magazine, Aeon and NBC.
