Polarisation on social media: The tools that can smash our echo chambers

Social networks are built to turn us against each other. Can we fix them?

Could we make a form of social media that doesn't incentivise polarisation?

There are two interesting debates here. One is whether social media causes people to become more polarised and extreme, and that is a difficult question to answer. The easier question is whether social media acts as an amplifier of existing divisions in society.

What you do see is that it clearly amplifies existing tensions through mechanisms that are now well-established in research, including echo chambers and filter bubbles that really work against – rather than with – people’s psychology.

Behind the scenes, there’s an algorithm that filters the information according to your click behaviour. So the algorithm takes these existing biases that are not that destructive on their own and puts them in an environment that’s harmful for the way that people think about information and social reality.
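The filtering mechanism described here can be sketched as a very simple ranker: score each post by how often the user has clicked on its topic before. Everything below — the names, the toy data, the topic-matching scheme — is an illustrative assumption, not any platform's actual algorithm.

```python
from collections import Counter

def rank_feed(posts, click_history):
    """Order posts by how often the user has clicked their topic before.

    posts: list of (post_id, topic) tuples
    click_history: list of topics from past clicks

    A feed ranked this way keeps surfacing more of what was already
    clicked -- the feedback loop behind filter bubbles.
    """
    clicks = Counter(click_history)
    # Higher score = topic clicked more often in the past.
    return sorted(posts, key=lambda post: clicks[post[1]], reverse=True)

feed = [("a", "sports"), ("b", "politics"), ("c", "cooking")]
history = ["politics", "politics", "sports"]
print(rank_feed(feed, history))
# politics ranks first, cooking last -- and each click on a politics
# post makes the next feed even more politics-heavy
```

The point of the sketch is that nothing in it is malicious: it only reflects past behaviour back at the user, which is exactly how existing biases get amplified.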

The mechanisms behind those echo chambers contribute to extremism and polarisation. Good studies show that if you put people in those types of environments, they become more polarised over time. They become more extreme.

Ultimately, my sense is that this is a design question: social media was not designed in a way where people were thinking, “Okay, how do humans interact with technology? What could possibly go wrong here?”

It was designed to maximise engagement. And that’s really still the main currency. Polarisation results in flame wars, more sharing, more yelling, and that all counts as engagement on these platforms. So the more polarisation occurs, the more activity there is, the more engagement, the more money the ads make, the more money the social media platforms make.

There are solutions and people have tried these things, but it’s not straightforward. One solution from social psychological theory, for example, is that when you expose people to the other group, they become more empathetic. But we know from research that this is not sufficient in itself. If you expose people more to the views of others on social media, they can become more antagonised than they were before.

But what would I do? I find it surprising that social media companies don’t want to be arbiters of truth and don’t want to moderate. But since the beginning of the internet, there have been moderators. And one of the things that worked really well in the early internet, especially on chat forums and blogs, was that we had moderators who were quite strict about rules of engagement.

I think what social media companies need is an honest manifesto of rules for positive engagement and to enforce those rules. And those rules shouldn’t only be decided by social media companies, but probably also the public and other stakeholders so that jointly we can devise a list of rules for positive interactions. That might include banning people with a history of attacking other people on social media, banning people who use tactics to try to get people riled up.

In a lot of our research, we’ve uncovered six broad manipulation techniques that people use to influence others. One of them is polarisation: there are certain language patterns and tactics that you can use to try to drive a wedge between two groups. You can then try to detect that language and define rules around it, so that you have justifiable criteria for deterring it.
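A rule-based detector of that kind could start as simply as flagging “us vs. them” phrasing for human review. The pattern list and threshold below are made-up placeholders, not the techniques identified in the research; a real system would use a validated lexicon or a trained classifier.

```python
import re

# Hypothetical wedge-driving phrases -- a toy stand-in for a real,
# validated lexicon of polarising language.
WEDGE_PATTERNS = [
    r"\bus\s+(?:vs\.?|versus)\s+them\b",
    r"\bthey\b.*\bruining\b",
    r"\breal\s+\w+\s+know\b",
]

def polarisation_score(text):
    """Count how many wedge patterns the text matches."""
    lowered = text.lower()
    return sum(bool(re.search(pattern, lowered)) for pattern in WEDGE_PATTERNS)

def flag_post(text, threshold=1):
    """Flag a post for moderator review if it matches enough patterns.

    Flagging feeds a review queue rather than auto-banning, which keeps
    the criteria transparent and contestable.
    """
    return polarisation_score(text) >= threshold

print(flag_post("It's us vs them, and they are ruining everything"))  # True
print(flag_post("Lovely weather for a walk today"))                   # False
```

Detection like this only gives you the justifiable criteria; the deterrence — warnings, rate limits, bans — is a separate policy decision.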

Another thing is that people currently can’t edit what they write. They don’t have an opportunity to improve what they want to say. I think it’s only human that people sometimes get angry and post something in the heat of the moment. It would be great if platforms were designed in such a way that people have an actual opportunity to correct themselves.

You can be nudged, and the social media platform could say, “Listen, this is not a productive conversation. We’re going to give you the opportunity to rewrite this in a more constructive way. If you don’t, then we’re going to have to remove this content.”
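That nudge-then-remove flow could be sketched as a small state machine. Here `is_constructive` is a stand-in for whatever classifier the platform trusts; all names and the toy word check are illustrative assumptions, not a real moderation system.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    nudged: bool = False  # has this draft already received a nudge?

def is_constructive(text):
    # Toy stand-in: a real platform would use a trained classifier here.
    return "idiot" not in text.lower()

def submit(draft, rewrite=None):
    """Return ('posted', text), ('nudge', message) or ('removed', None).

    First hostile submission gets a nudge with a chance to rewrite;
    a constructive rewrite is posted, anything else is removed.
    """
    if is_constructive(draft.text):
        return ("posted", draft.text)
    if not draft.nudged:
        draft.nudged = True
        return ("nudge", "This is not a productive conversation. "
                         "Rewrite it more constructively or it will be removed.")
    if rewrite is not None and is_constructive(rewrite):
        return ("posted", rewrite)
    return ("removed", None)

draft = Draft("You're an idiot and so is everyone who agrees with you")
print(submit(draft)[0])                                   # nudge
print(submit(draft, rewrite="I disagree, because...")[0]) # posted
```

The design choice worth noting is that the friction is asymmetric: constructive posts go through untouched, and only flagged ones pay the cost of a second step.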

I think social media companies would have to sit down with behavioural scientists and design a platform that not only has the technology, but also insights from the social and behavioural sciences on how to engage people in the right way and how to disincentivise polarisation and other negative processes. I do think it’s possible, but it would be a radically different platform.

Interviewed by Sara Rigby
