It starts like any other evening. But just as you slump onto the sofa and turn on the television, the front door explodes open and a dozen black-clad police swarm into the room.
You’re muscled to the ground, handcuffed and arrested at gunpoint. The charge? Murder. A murder that you’ve yet to commit.
This may sound like a scene straight out of the dystopian fiction Minority Report, where people are arrested based on clairvoyants’ visions of future crimes.
But this time, it’s a smart algorithm making the prediction, analysing your data to decide whether you have the potential to become a cold-blooded killer.
While there are still no algorithms that can predict that a murder will definitely take place, computer programs are already used across the world to identify people or places at high risk of criminal activity.
And according to several recent reports, the UK government is currently developing a murder prediction artificial intelligence (AI) tool to identify those at greatest risk of committing homicide.
But can this ‘Homicide Prediction Project’, or a human for that matter, ever accurately predict something as complex as whether a human will take the life of another?
Making a murderer
The science is currently far from perfect, but there are some well-known risk factors that an algorithm could use to assess whether someone might be capable of murder.
It’s little surprise that personality can play a role, with dark traits such as psychopathy linked to violent crime.

These traits are often combined into what scientists call the ‘dark triad’ of psychopathy, narcissism and Machiavellianism (being manipulative and exploitative to obtain power).
All these traits come with some level of hunger for power and control, selfishness, manipulation, deception, a lack of empathy and remorselessness.
We all have these traits to some extent. But clinical levels of narcissism and psychopathy are rare – estimated to affect 1-5 per cent and 1 per cent of the population respectively.
Importantly, clinical levels of psychopathy (sometimes referred to as antisocial personality disorder) are particularly common among criminals, including murderers, with an estimated 15-25 per cent of male prisoners in the US suffering from it.
“The evidence is pretty compelling,” says Dr Ava Green, a lecturer in forensic psychology at City St George’s, University of London.
“People with elevated dark traits don’t easily feel sorry for someone, they don’t feel guilty for what they’ve done and they don’t fear punishment or the consequences of their actions. Instead, they actually gain satisfaction from committing crimes.”
She adds that this is particularly the case for psychopathy, which is the most studied of the dark triad traits and is often assessed using a manual called the Psychopathy Checklist.
“Clinicians and law enforcement professionals use it to predict how likely people are to commit violent crimes and, more importantly, reoffend,” she explains.
Although narcissism is less studied in criminal offenders compared to psychopathy, Green says research shows that people with high levels of narcissism can get very aggressive when they are challenged, criticised or face failure – a lashing out called ‘narcissistic rage’.
A need for fame – even if that fame is notoriety – can, for example, be a motivating factor in mass shootings.

“In less serious manifestations of narcissism, that rage can be expressed in passive-aggressive form; you could sulk, you could fantasise about the ways in which you could receive justice and maintain a sense of power and control,” explains Green.
“But in more serious cases, it could lead to actual violence.”
In fact, Clare Allely, a professor of forensic psychology at the University of Salford, has suggested that narcissistic rage may have driven Elliot Rodger to murder six people and injure 14 in Isla Vista, Santa Barbara, in 2014.
After his drive-by shootings, he eventually killed himself, leaving behind a ‘manifesto’ dubbed My Twisted World, in which he outlined his severe anger, frustration and unhappiness – particularly concerning being rejected by women.
Allely adds that research also shows that violent offenders are more likely to have a condition called alexithymia, which is a difficulty in identifying emotions and internal body states – even simple ones such as sadness or hunger.
“Imagine not being able to describe that you’re getting increasingly angry or adopt any strategy to reduce those feelings. You can’t feel that build-up until you actually explode and end up engaging in violent explosive behaviour,” she says.
Other psychological traits are also associated with murderers.
For instance, some 15 per cent of murderers are estimated to have a major mental disorder such as schizophrenia, paranoia or severe depression.
Substance abuse is another major risk factor, as are previous criminal convictions, social isolation and deprivation, Allely adds.
The idea is that psychiatric diagnoses, when combined with factors like criminal history and socioeconomic background, can be fed into algorithms to flag individuals at higher risk of committing murder.
It’s a concept that many would find not only downright disturbing and unethical, but also unworkable in the real world.
But some data is beginning to suggest it’s not just possible – it’s effective.
Partners in pre-crime
Although it’s not yet used in practice, researchers at the University of Cambridge have tested a ‘super learner’ – an ensemble of 12 AI models that pool their predictions. Using details from past London domestic violence police reports, it correctly identified 78 per cent of cases that eventually resulted in domestic homicide.
The researchers noted that if the super learner becomes more precise, it could eventually “get to the point where most names it highlights will commit a domestic homicide.”
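For the curious, a ‘super learner’ is essentially a stacked ensemble: several base models each make a prediction and a meta-model learns how much weight to give each of them. The Cambridge team’s actual models, features and data aren’t reproduced here, so the sketch below is purely illustrative – synthetic data, hypothetical features and off-the-shelf scikit-learn components.

```python
# Illustrative only: a stacked ensemble ("super learner") for binary risk
# classification. The data, features and model choices are hypothetical.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 6))  # six synthetic risk features per case
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5_000) > 2).astype(int)  # rare positive outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A real super learner combines many base models; two are shown for brevity.
ensemble = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("boost", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-model that weights the base predictions
    cv=5,
)
ensemble.fit(X_train, y_train)

# Recall is the share of genuinely positive cases the model flags - the same
# kind of figure as the 78 per cent quoted for the Cambridge system.
print("recall:", recall_score(y_test, ensemble.predict(X_test)))
```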
The homicide-risk system that the UK government is developing would go beyond this police-report-based super learner.
It would incorporate broader data on individuals with criminal convictions, including their age at first police contact, history of domestic abuse, and information related to mental health, addiction and disability.
Similar predictive tools are already being used around the world. In Germany and Switzerland, the PreCrime Observation System (Precobs) forecasts where burglaries are likely to occur based on past data.
In the US, systems like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) assess the likelihood that defendants awaiting trial or sentencing will reoffend within two years.
Another tool, Geolitica (formerly PredPol), uses historical crime data to identify potential crime hotspots.

It’s clear there are ethical problems with using such programmes – they have been heavily criticised for being biased and racist.
The COMPAS algorithm, for example, “is even more biased than regular people,” says Prof Gerd Gigerenzer, a director emeritus of the Max Planck Institute for Human Development in Berlin.
The inclusion of factors such as disability and socioeconomic circumstances in the UK’s planned tool has also raised concerns.
Sofia Lyall, a researcher for Statewatch – the pressure group that first uncovered the UK government’s predictive murder project – said in a statement that the system “will reinforce and magnify the structural discrimination underpinning the criminal legal system.”
She added: “Like other systems of its kind, it will code in bias towards racialised and low-income communities.
"Building an automated tool to profile people as violent criminals is deeply wrong, and using such sensitive data on mental health, addiction and disability is highly intrusive and alarming.”
Black box justice
There’s also another key issue: we often don’t know exactly how these systems work.
Predictive software is typically developed by private companies, and the details of how their algorithms operate are rarely disclosed, says Gigerenzer. “There aren’t many academic studies on them,” he adds.
It’s true that some AI programs have shown success in helping police manage large areas.
For example, the creators of KeyCrime – a system that predicts the timing and likely perpetrators of robberies – claim it saved the Milan Police Department €2.5m (approx. £2.2m, or $2.9m) in just one year.
However, these systems tend to falter when it comes to predicting whether a specific individual will commit a crime.
One 2018 study put the widely used COMPAS algorithm to the test by comparing its predictions to those made by members of the public.
After reading brief summaries of defendants – including their age, sex and criminal history – participants were asked to judge how likely each person was to reoffend.
Surprisingly, the non-experts performed slightly better than the algorithm, despite having no formal experience in criminal justice.
That’s also the experience of many police forces. The Los Angeles Police Department, for example, was an early adopter of predictive policing programmes, including PredPol, but has since given up on the approach.
“They shut it down because it didn't work,” says Gigerenzer. “It created lots of false positives and eroded community trust.”

Green is also sceptical about the effectiveness of such programmes. “When it comes to murder, there are so many different factors involved that it’s hard to see how a computer algorithm could possibly capture them,” she argues.
She also points to gaps and caveats in the underlying psychology.
For example, while psychopathy may be linked with criminal behaviour, the association is far from simple.
One study found that while psychopathy is linked to a higher risk of violent crime among adult offenders, the same doesn’t hold true for younger individuals.
Green’s own research has highlighted flaws in the tools commonly used to assess dark traits in prisons and forensic settings.
Many of these checklists and manuals are simply based on outdated science, she says. For example, her work shows that dark traits can manifest differently in women – yet most assessments are designed around male behaviours.
Allely also points out that many crimes can be triggered – or prevented – by small, unpredictable factors in someone’s environment.
She gives an example: imagine two 15-year-old boys with similar psychological and social risk profiles. Both are socially isolated and considering a school shooting.
But just before the planned attack, one of them has a chance encounter with a stranger – perhaps someone who offers a kind word, asks how they’re doing, or simply acknowledges them.
“Just that connection with another individual who recognises them,” she says, “could make someone change their mind.”
Gigerenzer doesn’t believe that an algorithm or AI will ever be able to predict murders accurately – no matter how much data we have.
“There are no success stories when it comes to predicting human behaviour,” he says. “Period.”

His own research has shown that AI systems can work well in stable conditions where there are clear rules and little uncertainty, such as when playing chess or Go, or predicting how proteins fold in biological research.
But, he argues, the minute you introduce human behaviour, they fail.
So while we could in theory have fully autonomous driverless cars, Gigerenzer argues it will never happen as long as there are humans driving or even walking alongside them.
We’re simply too unpredictable for such systems to function reliably.
Gigerenzer also warns that putting too much faith in algorithms and big data can backfire. In fact, he’s shown that complex, data-heavy models often perform worse than simple, intuitive rules.
In a 2022 study, for example, he and his colleagues looked at a model designed to predict how many people in the US would visit a doctor for flu symptoms.
The algorithm used around 50 million Google search queries to make its forecasts. But surprisingly, it was less accurate than a far simpler approach: using just one data point – the number of flu-related doctor visits the previous week.
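That one-number rule is an example of a recency heuristic: forecast this week’s figure by simply repeating last week’s. The toy sketch below uses synthetic numbers, not the real flu or search-query data, but it shows how a many-variable model can lose to that simple rule once the relationship between searches and illness drifts – a commonly cited reason why query-based flu forecasting went astray.

```python
# Illustrative only: a toy comparison of a many-variable model against the
# one-number "recency" rule (this week's forecast = last week's figure).
# Synthetic data, not the real flu-visit or Google-query numbers.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 200
visits = 100 + np.cumsum(rng.normal(scale=5, size=weeks))  # made-up weekly doctor visits

# "Big data" approach: regress on 50 noisy search-query-style predictors.
queries = visits[:, None] * (1 + rng.normal(scale=0.1, size=(weeks, 50)))
queries[150:] *= 1.3  # search behaviour drifts after the training period
model = LinearRegression().fit(queries[:150], visits[:150])
complex_error = np.abs(model.predict(queries[150:]) - visits[150:]).mean()

# Recency rule: this week's forecast is simply last week's observed value.
simple_error = np.abs(visits[149:-1] - visits[150:]).mean()

print(f"complex model error: {complex_error:.1f}")
print(f"recency rule error:  {simple_error:.1f}")
```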
Gigerenzer has also shown that a complex system for judging people’s creditworthiness (how suitable they’re considered to be for a financial loan), based on 250 variables, performed no better than a simpler – and genuinely transparent – system with just 12 variables.
This weakness in data-driven models needs to be taken seriously, he warns, as there may be concerning consequences.
Imagine you’re a judge, and an algorithm flags someone as high risk for reoffending. If you go against that recommendation and the person does reoffend, you’re the one who has to justify your decision.
As a result, it may feel safer to simply follow the algorithm’s advice, even if your own judgment says otherwise.
That’s the danger, Gigerenzer argues: algorithms can quietly take control, not because they’re always right, but because it’s risky to question them.
Proof, not patterns
So, what’s the alternative? There are some clear solutions – but they’re neither quick nor cheap.
Since economic deprivation is closely tied to substance abuse, childhood adversity and future criminal behaviour, many researchers argue that addressing poverty could play a key role in reducing violent crime.
The same goes for providing robust mental health support; investing in care early on may prevent violence long before it happens.
Green also argues it’s important to recognise that most people who have psychopathy or narcissism don’t commit murder.
She highlights the example of Dr James Fallon, the US neuroscientist who underwent a brain scan and discovered he had psychopathic features.
“Why didn’t he become an offender? Why was he living a life free of any criminal convictions?” she says. “Probably because he had a good childhood.”

On the flip side, most people who didn’t have a good childhood don’t go on to become murderers either.
“Many of us have had adverse childhood experiences. Technically, a computer could predict that we’re likely to commit a crime,” says Green.
But the vast majority of us never will. “Even if you have repeated exposure to trauma, sexual abuse or neglect in your childhood, it is possible to find your support network later on,” she says.
Luckily, there are other techniques to assess the level of threat posed by certain individuals that do away with some of these problems.
Allely says she is a fan of ‘threat assessment’: a detailed evaluation, carried out by human experts in organisations such as the secret service or the police, of individuals who may need monitoring.
In this type of assessment, people aren’t labelled as ‘high risk’ or ‘low risk’. “Threat assessment acknowledges the fact that we will never be in a position to actually predict who’ll commit murder,” says Allely.
Common threat assessment models include the Path to Intended Violence Model and the TRAP-18 (Terrorist Radicalisation Assessment Protocol-18).
Instead of focusing on mental health diagnoses or criminal history, these tools examine a person’s thoughts and behaviours – such as holding grievances, showing an interest in violence, or researching and planning potential attacks.
A threat assessment “wouldn’t care a jot that you have schizophrenia,” says Allely. Instead, it would probe deeper, asking if you have any auditory hallucinations and whether they are violent in nature.
This means a diagnosis in itself isn’t a red flag. But if someone with schizophrenia has violent hallucinations and gets excited by them, that might be highlighted as a risk factor.
And if they are also researching and planning an attack, that would escalate the amount of attention needed.
Ultimately, human beings are more than just a collection of data points.
Rather than being strictly ‘good’, ‘bad’, ‘high risk’ or ‘low risk’, most of us have the capability to make good or bad choices under certain circumstances.
An algorithm might flag your risk. A checklist might say you fit the pattern. But, for now, that’s a long way from knowing what someone will actually do.