Dr Pragya Agarwal © Asadour Guzelian

Pragya Agarwal: "We learn biases through our lifetime. And because we learn them, we can unlearn them as well"

Published: 08th April, 2020 at 00:00

Behavioural scientist Dr Pragya Agarwal explains why we all have implicit biases and how we can unlearn them.

No matter how open-minded we consider ourselves to be, all of us hold biases towards other people. Dr Pragya Agarwal explains where these biases come from and why it’s important for us to recognise and unlearn them to help make the world a better, fairer place.


Tell us about the types of biases.

An explicit bias is something that is very clear. If somebody purposefully discriminates between two people based on their race or skin colour or the university they went to, and it’s clear that this discrimination is happening or these prejudices exist, then that is an explicit bias.

But there are also implicit ones, which are more difficult to identify as biases. These affect our decisions and our actions, but they are not very clear. For instance, making fun of somebody, or preferring one person over another: if someone looks at a CV and says, “Oh, I think this person is more qualified than the other,” just because they went to a certain university.

All of us also carry a conformity bias; we are more attracted to people who are like us. Those kinds of biases are not easy to mark out explicitly.


Why do we have biases?

In evolutionary terms, we are designed to differentiate between people, and make those quick decisions between people who belong to our group, or our tribe, and those who don’t. That was kind of a survival strategy because resources were limited and people had to say, “this is a threat to me or to the limited resources, and so this person is an out-group.”

We make these quick decisions about whether a person or an object is a threat, or whether we should fear them. These kinds of in-group, out-group demarcations are made very quickly, because we have to process so much information. There’s no time to take in every bit of information on a rational, logical level.


So, a lot of this is processed on the basis of our previous experiences. We make quick matches with those experiences: say, in the past this kind of person or situation was a threat to us, so we assume this one will be too.

That’s how these immediate stereotypes are formed. We quickly make demarcations and distinctions and labels, as a way of processing information really quickly before we can take it to a rational level in our brain.

Are there benefits to this?

Absolutely. Say I go shopping and want to choose a brand of cereal in the supermarket. If I took every bit of information around me and weighed it up and tried to make an independent decision based on clear analysis, then there’s not enough time. I would be stuck with every decision in the world.

But there are obviously negative sides to it in situations where these decisions actually make an impact – where they have life-and-death consequences. They’re more important than just choosing a brand of cereal.

Once you’re aware of unconscious biases, can you train yourself out of them?

There’s a whole debate about whether unconscious biases are something we’re born with or whether we can unlearn them. Personally, I believe that a lot of these biases are learned and shaped through our experiences: the way we have been brought up, the cultural and social context, the media we’ve been exposed to, the things our tribe and community tell us, the things we talk about or read in newspapers.

We learn them through our lifetime. And because we learn them, we can unlearn them as well. I believe that once we become aware of them and we reflect on them, we can change our attitudes accordingly.


So, things in your childhood appear later in life as unconscious biases?

Yes. In my book, I talk about developmental psychology, and how children, as they’re growing up, start forming the sense of in-group and out-group associations. That’s a natural response for children, because they’re making sense of their own identity, their own place in the world.

It’s largely shaped by who they see around them, who they see as foes, who they see as friends, who they find comfort with. There’s no real prejudice involved at that stage, but prejudices are bolstered and reinforced by messages they might get from their parents, or from their education, or the books they read and the TV that they watch.

Do you see a future without these biases?

No. I don’t think so. I think change will happen, and is happening slowly – very slowly, because there is always resistance to any kind of change to the status quo. People who have privilege will always resist, because that threatens their status, and that means they worry about what their position and place in society will be once their status changes.


It’s important that we talk about it, that we become aware of things [that arise from biases] like micro-aggressions. Things that were acknowledged and ingrained as part of our culture, and accepted as okay, even though they hurt the people who were being marginalised or victimised.

We cannot just do away with all our cognitive biases, all our implicit biases. Bias is not always negative. But we can do away with the stereotypes, prejudices and discrimination that are linked to some of the biases that we carry.

How does social media fit into this?

Some of the discourse around biases and prejudices can become quite heated, because it can feel like a judgment on our whole identity: we say that if you are biased, you are a bad person.

What I’m trying to do in my book is, by giving scientific evidence and bringing in different case studies and theories, to help us understand that we can all unlearn some of our toxic behaviours.

Yes, social media is creating echo chambers and filter bubbles. Social media is strengthening the sense of belonging in a particular community, that “I belong in a particular tribe, so I cannot engage with anybody who does not belong in that.” Again, we’re falling back on primal in-group, out-group tendencies through these mediums.

But I also think that these divides are being reinforced by the climate in which we live. If a marginalised community starts talking about and pushing back against prejudices, then there will be further divides initially. But having more evidence, and open-minded, non-judgmental platforms for these discussions, is important.

How do you study people’s biases?

It’s difficult to measure and quantify these things. In my book I critique some of the tools and methods which have been treated as the definitive way to measure bias. Like the Implicit Association Test (IAT), for instance.

The IAT was proposed by Harvard psychologists and it has been used for a long time, because we don’t have any other test. It’s a useful tool, but some people think it gives a measurable value for what implicit bias is. It works on the basis of association, and in that way it tells us what our implicit biases are – whether I associate one thing with another.


For example, if I say apple = green all the time, then obviously I believe, firmly, that apples are always green and can never be red. That’s speaking very simplistically. The IAT will give these associations a value, but that number doesn’t really give you an absolute marker for the kinds of biases we carry.

I see the IAT being used a lot by organisations. They call it ‘diversity training’, or ‘implicit bias training’, but it’s not training you to understand what implicit bias is, or how to tackle it.
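To make the association idea concrete, here is a purely illustrative sketch of how an IAT-style score can be derived from reaction times. All numbers are invented, and this is a simplified stand-in, not the actual scoring algorithm (real IATs use carefully controlled trial blocks and the D-score procedure) – it only shows why a single number like this is not an absolute marker of bias.

```python
# Toy sketch of an IAT-style score. Invented data, simplified scoring --
# real IATs use controlled trial blocks and the D-score algorithm.
from statistics import mean, stdev

# Hypothetical reaction times (ms) for two sorting conditions:
# "congruent" = the pairing a person finds easy to associate,
# "incongruent" = the reversed pairing.
congruent = [620, 580, 640, 600, 590, 610]
incongruent = [780, 820, 750, 800, 770, 790]

def iat_style_score(congruent, incongruent):
    """Difference in mean reaction time, scaled by the pooled spread.

    A larger positive value suggests a stronger implicit association
    with the 'congruent' pairing -- but, as the interview stresses,
    one number cannot capture what biases a person actually carries.
    """
    pooled = congruent + incongruent
    return (mean(incongruent) - mean(congruent)) / stdev(pooled)

score = iat_style_score(congruent, incongruent)
print(round(score, 2))  # a positive score: slower on the reversed pairing
```

Note how fragile the output is: a handful of slow trials, fatigue, or unfamiliarity with the task shifts the score – one reason a single test result shouldn’t be read as a verdict on a person.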

Can a computer be biased?

We might think that AI is neutral – that is certainly how people promote AI-based hiring and recruitment platforms. People say that, because it’s technology, it will do away with human biases. But that’s completely incorrect, because machines are not created in a vacuum; they are designed by humans and built on the data that already exists.

So, all the biases from the team of developers, and from the data, are reinforced and built into the system itself. And when these systems and technologies then reproduce those biases, they perpetuate the biases that already exist in society, so it becomes a kind of vicious cycle. We need to be very careful when we use technology and machine learning.
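The feedback loop described above can be sketched in a few lines. This is a deliberately naive, hypothetical example with invented data – a “model” that simply learns hiring rates from skewed historical decisions will reproduce the skew, and if its predictions drive the next round of decisions, the training data gets even more lopsided.

```python
# Toy sketch of the vicious cycle: a model trained on biased historical
# hiring data reproduces the bias. All data below is invented.
from collections import defaultdict

# Hypothetical past decisions: (applicant group, hired?)
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def learn_hire_rates(records):
    """Estimate P(hired | group) by counting -- the simplest 'model'."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = learn_hire_rates(history)
print(rates)  # group A favoured at 0.8 vs group B at 0.4

# If these learned rates now decide who gets hired, the next batch of
# "training data" is even more skewed towards group A -- the cycle the
# interview warns about.
```

Nothing in the code mentions prejudice; the bias lives entirely in the data the system was built on, which is the point being made.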

But this isn’t just high-tech stuff, is it? It’s in our homes and in our phones.

Yes, absolutely. The problem in tech, and in STEM more broadly, is that the developer teams [that design the tech] are largely male-dominated. There are studies that talk about instances of sexism and misogyny in Silicon Valley.

Those kinds of biases within teams can get built into the technology or systems they’re creating. So, giving voice assistants feminine voices, or female names, creates and reinforces the notion that women are in a subservient or an assistant role. That they can be talked to in a dominating way, and they will not retort or stand up for themselves.


There was a report by the UN a couple of years ago which revealed that, in reply to hearing something sexually demeaning, a voice assistant would only say, “I’d blush if I could”.

I think a lot of organisations are beginning to take these concerns on board. I know that there have been changes in the way voice systems have been designed, and the things they can say in response to sexual harassment statements.

But we need to address these things at the beginning, before the tech runs away from us.


  • Dr Pragya Agarwal is a behavioural and data scientist, ex-academic, and a freelance writer and journalist. She runs The 50 Percent Project, a research think tank for gender equality.

Sway: Unravelling Unconscious Bias by Pragya Agarwal is out now (£16.99, Bloomsbury Sigma).


Amy Barrett, Editorial Assistant, BBC Science Focus

Amy is the Editorial Assistant at BBC Science Focus. Her BA degree specialised in science publishing and she has been working as a journalist since graduating in 2018. In 2020, Amy was named Editorial Assistant of the Year by the British Society of Magazine Editors. She looks after all things books, culture and media. Her interests range from natural history and wildlife, to women in STEM and accessibility tech.

