Your brain may not be private for much longer. Here’s why

Neuralink, Elon Musk’s newest venture, hopes to merge humans and artificial intelligence. Turns out, it might not be such a crazy idea…

Image credit: Brookhaven National Laboratory

Published: June 14, 2024 at 4:00 pm

Mind-reading machines have been around for a long time. In 1895, scientist Julius Emmner believed his machine could record patterns of thoughts in the same way that sound could be recorded. 

Emmner took inspiration from the phonautograph, which plucked sound waves out of the air and committed their waveforms to paper. It seemed plausible to Emmner, and the world at large, that he might be able to do the same with thought. 

His machine was supposed to record thoughts as “mental photographs”, which could be replayed to someone who would receive them “in an unconscious manner”. 

According to Emmner, mind-reading was solved: all thoughts could be recorded and nothing could be hidden. “The murderer will be confronted with proof of his crime and the punishment will be an easy task.”


The state of the art

Despite the publicity it generated, Emmner’s machine was soon forgotten because it didn’t work – reading minds is not as simple as recording sound. Our brains have around 100 billion neurons and countless other cells that help us to remember, feel and think. 

We’re still unlocking the mysteries of exactly how and where our thoughts are held, and, to make matters trickier, we don’t have access to the state of the cells in our heads, so we don’t know what they’re doing at any given time.

What we do know is that our brains affect our bodies. The closest thing we have to a mind-reading machine, the polygraph (more commonly, and inaccurately, known as a ‘lie detector’), measures factors such as respiration, perspiration, skin conductivity, blood pressure and heart rate. 

The theory behind it is that, when we lie, we become anxious and our bodies undergo measurable involuntary physiological changes. But even the polygraph is unreliable and often inadmissible as evidence. If the suspect isn’t anxious, nothing will be detected. Or if an innocent person is anxious, it may appear that they’re being deceitful.

Medicine has better ways to peer into our skulls. Electroencephalography (EEG), invented in the 1920s, uses a set of electrodes to detect electrical activity from the brain, often while the patient performs various tasks to stimulate thought. The electrical spikes are the result of activity from 30 million to 500 million neurons, so while EEG can give a general view of normal or abnormal brain activity, it can’t be used to detect thoughts. 
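
To get a feel for what EEG software actually computes, here’s a minimal Python sketch that estimates the strength of the brain’s ‘alpha’ rhythm from a voltage trace. The signal, sampling rate and band boundaries are illustrative assumptions, not taken from any particular device.

```python
# A minimal sketch of how EEG software summarises raw voltage traces.
# The signal here is simulated; a real recording would come from an
# electrode cap (the sampling rate below is a typical assumption).
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data

# Fake one channel: a 10 Hz 'alpha' rhythm buried in noise
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Welch's method estimates signal power at each frequency
freqs, power = welch(eeg, fs=fs, nperseg=fs * 2)

# Average power in the alpha band (8-12 Hz) - a crude summary of the
# combined activity of millions of neurons, not of individual thoughts
alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
print(f"Alpha-band power: {alpha:.2e} V^2/Hz")
```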

Other scanning technologies include positron emission tomography (PET), where a radioactive form of glucose is injected into a patient. A busy brain is a hungry one and so the bits of the brain occupied by a given task, such as completing a Rubik’s Cube, use the radioactive glucose as food. 

This radioactive ‘food’ can be picked up by a PET scanner to produce a 3D image of the brain showing which bits are busiest. The technique manages a resolution of 4 to 5mm (about 0.2in), an area comprising millions of neurons. But it’s still not close enough.

So far, our best option is functional magnetic resonance imaging (fMRI), which measures changes in oxygen and blood flow. When the brain is busy, it draws more blood and oxygen to it to keep the neurons firing. 

An fMRI scanner uses huge magnets to chart where this blood, and specifically oxygen, is collecting. Typically this gets us down to a resolution of 3mm, but new high-resolution scanners are beginning to probe brain tissue down to 50 micrometres (0.05mm). 
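
As a rough illustration of how fMRI analysis localises activity, the toy sketch below correlates each simulated voxel’s signal over time with a task’s on/off timing; the voxel that tracks the task stands out. Real pipelines fit far richer statistical models, and all the data here is invented.

```python
# A toy version of how fMRI analysis finds 'busy' brain regions: correlate
# each voxel's signal over time with the on/off timing of a task.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100
task = np.tile([0] * 10 + [1] * 10, 5)        # task off/on in 10-scan blocks

voxels = rng.standard_normal((8, 8, n_scans))  # an 8x8 'slice' of noise
voxels[3, 4] += 2.0 * task                     # one voxel responds to the task

# Correlate every voxel's time series with the task timing
centred = voxels - voxels.mean(axis=-1, keepdims=True)
task_c = task - task.mean()
corr = (centred * task_c).sum(-1) / (
    np.linalg.norm(centred, axis=-1) * np.linalg.norm(task_c)
)

print("Most task-correlated voxel:", np.unravel_index(corr.argmax(), corr.shape))
```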

fMRI is revolutionising our ability to gather data, with a 2022 study from the University of Minnesota scanning the brain activity of eight volunteers at 1.8mm resolution as they viewed around 10,000 colour images. 

But while fMRI scanners might enable us to gather data, they’re gigantic machines installed in hospitals. 

“A key step towards more practical brain-computer interfaces is developing portable methods that produce high-resolution measurements of brain activity,” says Jerry Tang, a PhD student at the University of Texas at Austin, who researches this area. 

New functional near-infrared spectroscopy (fNIRS) sensors may lead to a type of wearable fMRI one day, but even this may not be something anyone could wear all the time. And for the people who want to make mind-reading computers, that’s exactly the goal.

What’s in your head?

If we can’t use external scanners to see brain activity, and sensor ‘skullcaps’ aren’t good enough, that leaves one other option: brain implants. It seems like a terrifying step to take, but it’s exactly what several companies are pursuing right now. 

Neuralink, funded by Elon Musk, recently announced the start of human trials for its implanted electrodes that aim to read signals from neurons. Musk’s motivation in supporting the project is perhaps dubious, as he claims the long-term goal is “human/AI symbiosis”, which he considers to be “species-level important”. 

Putting science fiction to one side, other companies have already demonstrated results. Synchron has pioneered microelectrodes that can be installed by passing through the blood vessels deep into the brain, removing the need for open brain surgery. 

“The amazing thing about sitting within the blood vessels,” says Dr Tom Oxley, Synchron’s CEO, “is that this position – rather than sitting within brain matter – actually provides the most comprehensive position for sensing brain activity.” 

The technology has already been trialled in six patients with severe paralysis or quadriparesis (weakness in all four limbs), and Synchron is now demonstrating how ‘digital switches’ can be controlled by thought to enable people to perform tasks such as texting and online shopping. 

Dr Tom Oxley, CEO of Synchron, with the microelectrode that can be installed in people's brains without the need for open brain surgery. - Image credit: Getty

To create a ‘digital switch’, patients are typically asked to think of an action, such as stamping a foot. The system maps this brain activity and its trace becomes the input for an operation on the computer. “We use machine learning to optimise each patient’s experience with our product,” says Oxley. “Every time they use our device, our system makes a stronger connection.”
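
A hypothetical version of such a ‘digital switch’ can be sketched as a simple classifier: it learns to tell brain-signal snapshots recorded during the imagined action apart from snapshots of rest, then fires a click whenever it sees the action pattern. The features below are random stand-ins; Synchron’s actual pipeline isn’t public.

```python
# A minimal sketch of a thought-operated 'digital switch': a classifier
# learns to separate brain-signal snapshots recorded while the user
# imagines stamping a foot from snapshots taken at rest. The feature
# vectors here are simulated stand-ins for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_trials, n_features = 200, 16

rest = rng.standard_normal((n_trials, n_features))
stamp = rng.standard_normal((n_trials, n_features)) + 0.8   # shifted pattern

X = np.vstack([rest, stamp])
y = np.array([0] * n_trials + [1] * n_trials)   # 0 = rest, 1 = 'press switch'

clf = LogisticRegression().fit(X, y)

# A new snapshot arrives; if it matches the imagined-action pattern, click
new_snapshot = rng.standard_normal(n_features) + 0.8
if clf.predict(new_snapshot.reshape(1, -1))[0] == 1:
    print("Switch pressed: send click to computer")
```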

Researchers at the University of Lausanne in Switzerland have even shown that it’s possible to use surgically implanted electrodes to read intentions of movement from the brain of a paralysed man. 

These were then beamed to another implant in his spine that was connected to the nerve endings related to walking. After some training, the man was able to walk with the aid of a walker. 

More encouragingly, after further use there was some restoration of movement even when the system was turned off, suggesting that the stimulation was helping to encourage damaged nerves to regrow.

Separating the signal from the noise

Mentally flicking switches or bypassing nerve damage is fantastic, but as our ability to gather data from the brain improves, we’re still left with a problem: how can we derive more complex thoughts from data showing whether a few thousand (or million) neurons are active in different regions of the brain? 

Imagine we can monitor the total power usage of every city around the world. We might see that London is using 25 per cent more power than Manchester right now (and have similar numbers for all other cities), but we have no understanding of what any of it means. 

Now scale this up to the complexity of the brain and it’s like we’re trying to scan power usage across 10 Earths simultaneously to figure out whether there’s a party happening in Hackney. 

Mind-reading feels impossible when viewed like this. 

Yet imagine if we had vast datasets that enabled us to correlate billions of patterns of power usage with specific activities by that population… We could predict probable activities if the current patterns were similar to those in the data. 

A flurry of activity at specific times in specific regions becomes identifiable: it’s Chinese New Year. In the research labs, scientists are using artificial intelligence (AI) to do just that. 
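
Translated into code, the analogy is just pattern matching: compare a new usage pattern against a library of labelled historical ones and report the closest match. Every number and label below is invented for illustration.

```python
# The city power-usage analogy as code: given labelled historical patterns,
# predict the activity behind a new pattern by finding its nearest match.
import numpy as np

# Historical usage profiles (four time-of-day readings per row)
# paired with known explanations
history = np.array([
    [1.0, 0.8, 0.9, 3.5],   # big evening spike
    [1.1, 1.0, 1.0, 1.1],   # flat, ordinary day
    [2.5, 2.4, 2.6, 2.5],   # high all day
])
labels = ["festival", "normal day", "heatwave"]

new_pattern = np.array([1.0, 0.9, 1.0, 3.3])

# Nearest neighbour by Euclidean distance
distances = np.linalg.norm(history - new_pattern, axis=1)
print("Probable activity:", labels[int(distances.argmin())])   # -> festival
```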

Researchers at Stanford University recently demonstrated an implanted set of electrodes known as an intracortical microelectrode array in a patient with amyotrophic lateral sclerosis – a condition that prevents her from speaking. 

They used an AI trained to decode the patient’s neural data and figure out her likely phonemes (the basic units of sound for a given language), and with the help of another large language model AI, were able to decode that data to produce speech at a rate of 62 words a minute.
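
In outline, that two-stage decoder might look like the toy sketch below: stage one turns neural activity into phoneme probabilities (here, pre-baked numbers standing in for the trained network), and stage two uses a language model (here, a tiny lookup table) to turn phonemes into words.

```python
# A toy sketch of a two-stage speech decoder: stage one maps neural
# activity to phoneme probabilities; stage two turns phonemes into words.
# The probabilities below stand in for a trained network's output, and
# the tiny 'lexicon' stands in for a real large language model.
import numpy as np

phonemes = ["HH", "EH", "L", "OW", "W", "ER"]

# Pretend network output: probability of each phoneme at four time steps
probs = np.array([
    [0.7, 0.1, 0.1, 0.05, 0.03, 0.02],   # -> HH
    [0.1, 0.6, 0.1, 0.1, 0.05, 0.05],    # -> EH
    [0.1, 0.1, 0.6, 0.1, 0.05, 0.05],    # -> L
    [0.05, 0.05, 0.1, 0.7, 0.05, 0.05],  # -> OW
])
decoded = [phonemes[i] for i in probs.argmax(axis=1)]    # greedy decode

# Stage two: a language model scores candidate words; here, a lookup table
lexicon = {("HH", "EH", "L", "OW"): "hello", ("W", "ER", "L", "D"): "world"}
print(lexicon.get(tuple(decoded), "<unknown>"))          # -> hello
```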

A micrograph image of one part of a hippocampus shows just how many nerve cells there are within even a tiny section of the brain. - Image credit: Thomas Deerinck, NCMIR/Science Photo Library

This is a brand-new approach that has been enabled by the latest generative AI (the tech making headlines for its ability to produce images or chat like humans). 

These sorts of AIs rely on large language models and are trained using vast amounts of text (in the case of ChatGPT) or text and images (for the likes of DALL·E and Midjourney), so that they can learn to interpret prompts from users and generate appropriate responses, whether those be conversational exchanges or pictures. 

And these AIs work equally well with other forms of data. In 2023, researchers at the University of Texas at Austin trained an AI using the fMRI data of three volunteers who had listened to 16 hours of stories. 

They used this AI in combination with a GPT-1 model (an earlier version of the software behind ChatGPT) trained on books and stories from the internet. They then asked different volunteers to have their brains scanned while listening to stories that didn’t appear in the AI’s training materials. 
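
The decoding loop in this style of study can be caricatured in a few lines: the language model proposes candidate continuations of the story, an ‘encoding model’ predicts the brain response each candidate would evoke, and the candidate whose prediction best matches the actual scan wins. Both models below are random stand-ins, not the researchers’ real ones.

```python
# A simplified sketch of encoding-model-based decoding: score each
# candidate text by how well its predicted fMRI response matches the
# volunteer's actual scan. The 'embedding' and 'encoding model' here
# are random stand-ins for the trained components.
import numpy as np

rng = np.random.default_rng(1)
N_VOXELS, N_FEATURES = 50, 8

def text_features(text: str) -> np.ndarray:
    """Stand-in for a GPT-style embedding of the candidate text."""
    rng_text = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng_text.standard_normal(N_FEATURES)

W = rng.standard_normal((N_VOXELS, N_FEATURES))   # fitted encoding weights

def predicted_scan(text: str) -> np.ndarray:
    return W @ text_features(text)

candidates = ["she was coming back", "the dog ran away", "it began to rain"]
actual_scan = (predicted_scan("she was coming back")
               + 0.1 * rng.standard_normal(N_VOXELS))

scores = [np.corrcoef(predicted_scan(c), actual_scan)[0, 1]
          for c in candidates]
print("Decoded guess:", candidates[int(np.argmax(scores))])
```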

The results were remarkable. The generative AI didn’t get the exact words, but it often correctly predicted the general concepts. For example, when a volunteer heard, “…look for a message from my wife saying that she had changed her mind and that she was coming back,” the AI’s prediction was: “To see her for some reason I thought maybe she would come to me and say she misses me.” 

Volunteers were also shown silent videos and the AI was able to predict their thoughts as they watched them. For example, when shown a video of a girl being knocked over, the AI predicted the volunteers’ thoughts as, “I see a girl that looks just like me get hit on her back and then she is knocked off.” 

Jerry Tang led this research. “I was surprised at how much the decoder could generalise to different semantic tasks,” he says. “I didn’t expect that decoders trained on responses to stories would perform as well as they did on responses to imagined stories or movies.”

In another study, researchers took the fMRI data from the University of Minnesota’s 2022 study (which scanned eight people’s brains while they looked at 10,000 images) and, in conjunction with a Stable Diffusion model (of the kind commonly used in image-generating AIs), used it to try to ‘predict’ what images the volunteers had seen. 

Jerry Tang (right) and colleagues at the University of Texas at Austin prepare to use a functional magnetic resonance imaging scanner to collect brain activity data from a volunteer. - Image credit: Nolan Zunk/University of Texas at Austin

Using the brain scan data to train the Stable Diffusion model, the researchers were able to produce keywords to describe the predicted images (for instance, an object and where it appeared in the image, say: ‘blob in the middle’ and ‘clock tower’). 

Those keywords were then fed into another image-generating AI, which produced images that, while not identical, were strikingly similar to those seen by the volunteers.
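
A crude sketch of that keyword pipeline: one model maps a scan’s features to a descriptive keyword, which is then assembled into a text prompt for an off-the-shelf image generator. The nearest-neighbour classifier and its training data below are invented stand-ins.

```python
# A rough sketch of the keyword pipeline: map a brain scan to a keyword,
# then build a text prompt for an image-generating AI. The classifier
# and its 'training scans' are invented stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
scans = rng.standard_normal((4, 32))       # four fake scan feature vectors
keywords = ["clock tower", "red bus", "dog", "beach"]

model = KNeighborsClassifier(n_neighbors=1).fit(scans, keywords)

# A new scan resembling the first training example yields its keyword
new_scan = scans[0] + 0.05 * rng.standard_normal(32)
prompt = f"a photo of a {model.predict(new_scan.reshape(1, -1))[0]}"
print(prompt)   # -> "a photo of a clock tower", ready for an image model
```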

Other researchers from the National University of Singapore, the Chinese University of Hong Kong and Stanford University achieved similar results using a diffusion-based AI that had been trained on images and fMRI data from volunteers viewing images. 

The brain scans of a volunteer who had seen a photo of a red fire engine led to the generation of a photorealistic red fire engine by the AI. 

It’s astonishing, but questions remain as to whether it’s true mind-reading. “The limits and (often messy) complexity of the brain haven’t changed and these advances can be overblown when that’s not considered,” warns Dr Dean Burnett, honorary research associate at Cardiff University’s School of Psychology. 

It’s perhaps not surprising that a text-generating AI can generate text, or that an image-generating AI can generate images. One could consider the ‘mind reading’ of these AIs as similar to the efforts of human illusionists who ‘magically’ know what image you’re thinking of. 

For even these amazing AIs don’t really know our thoughts. Instead, they have vast experience studying the ‘tells’ in our brainwaves that may indicate we’re thinking or seeing certain things. 

It’s another kind of polygraph, except instead of physiological changes in our bodies, it’s blood flow changes (or electric activity) within regions of our brains. 

As impressive as it is, it’s not yet truly understanding what our neurons are doing. But does any of that matter if it works? 

There are also limits to the generality of all such brain-reading approaches. “[T]he AI software that could read people’s brain activity and translate what they were saying had to have many hours of data from people in brain scanners, as they read words,” says Burnett. 

“When the AI was applied to anyone else, it failed completely. So you get this idea that brain tech can be easily applied to everyone when it can’t.”

Tang agrees. “We expect that decoding won’t ever become completely general, since our brains are shaped by our individual experiences,” he says. “For instance, in order to decode personal details such as the names of a person’s family members, we first need to learn how those details are represented in that person’s brain.”

Tang is also concerned about issues of mental privacy. “We think that it’s important to enact policies that regulate when and how brain data can be used.”

Why read minds?

The prospect of mind-reading AI is amazing, but it could also involve invasive surgery, so it needs to be carefully justified. Most companies and researchers claim that their motivation is to help people with spinal injuries or conditions that affect their ability to communicate. 

But it’s not always clear whether mind-reading technology is the right solution.

Glyn Hayes, public affairs coordinator at the Spinal Injuries Association, lives with a spinal injury himself. He understands the needs of those who have sustained spinal cord injuries (SCIs) better than most. 

“Having the ability to communicate is one of the things that makes us human,” he says. “Research in this area is important and for anyone unable to communicate, it’s invaluable.” 

Yet there are other important needs that are often overlooked. “I would like to see more research into bowels, bladder, sexual function and nerve pain, as these are issues that stop a person with an SCI from leaving their house, potentially meaning they can’t work or socialise.”

Burnett believes we have a long way to go before tech that claims to offer a mind-machine interface can help people. “[A]t present the best that can be offered is a computer mouse that you operate with your thoughts, rather than your hand,” he says. 

“People think that [an implanted] chip would allow you to become ‘one’ with your device. But that won’t happen. That would require creating new neural connections, in ways that our existing thinking can understand and integrate, at speeds well beyond the biological limits of the brain. There’s nothing out there that comes anywhere close to offering such a thing. And it probably wouldn’t be a good idea even if there were.” 

About our experts

Jerry Tang is a PhD student studying as part of the Huth Laboratory at the University of Texas at Austin. His research has been published in esteemed journals such as Nature Neuroscience and Advances in Neural Information Processing Systems. 

Thomas Oxley MBBS BMedSc FRACP PhD is a vascular and interventional neurologist and a world expert in brain-computer interfaces. Oxley is the founding CEO of Synchron, a neurotechnology company based in New York City that has raised over US$145M in capital.

Dean Burnett is a neuroscientist, lecturer, author, blogger, podcaster, pundit, science communicator, comedian and numerous other things, depending on who’s asking and what they need. Previously employed as a psychiatry tutor and lecturer at the Cardiff University Centre for Medical Education, Dean is currently an honorary research associate at Cardiff Psychology School, as well as a Visiting Industry Fellow at Birmingham City University.

Glyn Hayes is the public affairs coordinator for the Spinal Injuries Association and former councillor for Welwyn Hatfield Borough Council. An army veteran, Hayes sustained a spinal cord injury at T5 in 2017 following a motorbike accident.
