Domestic disinformers in the United States are probably the best in the world at creating and spreading disinformation. They don’t really have to launch a campaign: they just plant seeds that grow with the infrastructure that exists online, and that are distributed with the help of the acolytes they’ve created.

They partner with influencers who launder the messages for their followers to consume. Pair that with microtargeting – sending really specific pieces of information based on what they learn about us from things like Facebook groups, combined with the public voter file that shows how we voted, what our issues are, who we care about, what we’ve donated to – and it’s much more effective than anything Russia or China could do.

I mean, they’re good at following events and understanding what’s going to be the meme du jour, but in terms of the level of disinformation that’s currently being produced and spread, I don’t think they could hope to achieve something like this. It’s endemically American.

It’s really hard to regulate the internet. This is perhaps the most complicated regulation that any democracy has ever pursued because of the ways that the internet has become enmeshed in our lives. It touches commerce, elections and freedom of speech. How do you disentangle all of that to create regulation that works for democracy? And should that be the government’s job?


I’ve been thinking a lot about TikTok lately, the way it works, the way it is so ultra-personalised. You can’t really do an influencer campaign on TikTok. You can’t pay for something to go viral. You also can’t create a fake account and expect something to go viral.

TikTok is dedicated to being the last happy corner of the internet. And because there are so many teens on the platform, the people at TikTok are not reserved about taking content down if they think it violates their policies – things that are blatantly false, or harassment. But TikTok is a private company, owned by a Chinese businessman. They make their own rules.

Certain countries need to deal differently with regulatory questions than we do in places like the UK and US. The Ukrainian government subsidises their telecom companies, along with access to all the main apps. And so, when we say we’re going to quit Facebook, that’s nice for us, but in Ukraine they have a subsidised dependence on social media platforms that we don’t. If we regulate, how are people in these countries that depend on these platforms for everyday, free communication – everything from baby pictures to work stuff to government business – going to operate?

I used to use the term ‘media literacy’, but now I talk about ‘information literacy’. Being information literate is broader than understanding how social media platforms work and how they target you. It’s about the whole ecosystem that a consumer of information online needs to understand to have the full context, like why am I being targeted with this?

In terms of how we fight this, there’s no easy solution. Fact checking people individually doesn’t work. People tend to stand their ground when they see something that tells them that they’re wrong.

I’ve found that getting into a conversation has some success. Ask them, “Why do you believe this article? What appeals to you about it?” Understand where they’re coming from and then hopefully equip them without saying, “you’re wrong.” Instead, we can say, “I know you care about child trafficking. Here is a better source than that.” And then, over time and with better evidence, they come to change their minds.

The emotional approach takes so much more time and energy, but it’s what you need to do to counteract disinformation online.

Interviewed by Aleks Krotoski

Nina studies the intersection of democracy and technology. She is the author of How To Lose The Information War: Russia, Fake News, And The Future Of Conflict.