Science Focus - the home of BBC Science Focus Magazine

Is AI sexist and racist?

From facial recognition to digital assistants - AI is all around us

We all use facial recognition to unlock our phones. And we all view online content automatically suggested to us. But some of us have rather more success with artificial intelligence (AI) than others.


A study of facial recognition AIs found that systems from leading companies including IBM, Microsoft and Amazon misclassified the faces of Oprah Winfrey, Michelle Obama and Serena Williams, while recognising the faces of white men almost flawlessly.

Even digital assistants such as Cortana and Google Assistant have female voices by default, perhaps unconsciously reinforcing the stereotype of female subservience in the minds of millions of users.

Much of this bias stems from the designers themselves: most AIs today are built largely by white men in their 20s and 30s without disabilities, who generally grew up in areas of high socioeconomic status and often share similar educational backgrounds.

Perhaps unsurprisingly, the resulting AIs are trained on narrow, unrepresentative datasets. For instance, one US government dataset of faces collected for training AIs was 75 per cent male and 80 per cent lighter-skinned. There's nothing deliberate about this: the developers simply didn't notice, because they had little experience of diversity themselves.
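Imbalances like these are easy to spot once you look for them. As a minimal sketch, assuming each record in a face dataset carries demographic labels (the field names and sample data here are hypothetical), a few lines of Python can report each group's share of the data:

```python
from collections import Counter

# Hypothetical toy dataset: each record is one labelled face image.
# Real datasets are far larger and may label demographics differently.
faces = [
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "darker"},
    {"gender": "female", "skin": "lighter"},
]

def proportions(records, field):
    """Return each label's share of the dataset for the given field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(proportions(faces, "gender"))  # {'male': 0.75, 'female': 0.25}
print(proportions(faces, "skin"))    # {'lighter': 0.75, 'darker': 0.25}
```

An audit like this only reveals a skew; correcting it means collecting or re-weighting data so every group is properly represented.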

Thankfully, the tide is turning, and today most major tech companies are trying to identify unwanted biases and eradicate them from our technologies.

To submit your questions email us at (don't forget to include your name and location)
