
Stop blaming avatar-generating AI for needlessly sexualised images – fault the creators instead

Dr Kate Darling argues that AI designers should take more responsibility for the content their creations produce.

Published: January 10, 2023 at 7:00 am

In December 2022, the internet was abuzz with a new app. For £1.79, Lensa AI would generate 50 artistic portraits based on uploaded headshots, quickly topping the download charts as users shared the pictures on social media. When some people complained of sexualised and disturbing body modifications, the app's creators put up a note saying they couldn't guarantee non-offensive content. But when artificial intelligence (AI) blunders, this type of disclaimer isn't enough.

When I tried Lensa AI's magic avatar feature for myself, I selected my gender and uploaded 10-20 headshots. It quickly returned flowered fairies, fantasy warriors, and other creative figures, all with recognisable features. Magical indeed, except that two of my images were nude and, oddly, sporting giant breasts. Other female-identifying users also reported being portrayed naked, despite having uploaded only professional headshots.

Aside from undressing women, the app also appears to 'beautify' their faces and slim down their bodies. Other users reported that their dark skin was lightened, and an Asian journalist discovered that her images were overly sexualised compared to her white colleagues’. From a technical perspective, it’s sadly not surprising that these AI portraits incorporate harmful stereotypes, including fetishising Asian women.

The reason is 'garbage in, garbage out', a saying that applies to most of today's AI systems. The output isn't magic: it depends mainly on what we feed in. Lensa AI uses Stable Diffusion, a model trained on 5.85 billion pictures scraped from the internet. If you indiscriminately grab material from the web, you invariably wind up with an app that likes to draw big boobs on my small, perfectly fine chest.

Generative AI models need such massive amounts of training data that curating it all is difficult. And while it's possible to add certain safeguards, it's impossible to anticipate everything the AI will create. If these tools are to be released at all, it makes sense that companies want people to use them at their own risk. For example, OpenAI's ChatGPT website warns users that the tool may generate incorrect information, harmful instructions, or biased content.

But these companies are also benefitting from our willingness to view the AI systems as the culprits. Because autonomous systems can make their own content and decisions, people project a lot of agency onto them. The smarter a system seems, the more we’re willing to view it as an actor on its own. As a result, companies can slap a disclaimer on the front, and a lot of users will accept that it’s the AI’s fault when a tool creates offensive or harmful output.

The issue goes well beyond 'magical' body edits. Chatbots, for example, have improved since Microsoft's infamous Tay began spewing racist responses within a few hours of its launch, but they still surprise users with toxic language and dangerous responses. We know that image generators and hiring algorithms suffer from gender biases, and that the AI used in facial recognition and the criminal justice system is racist. In short, algorithms can cause real harm to people.

Imagine if a zoo let a tiger loose in the city and said: “We did our best to train it, but we can't guarantee the tiger won't do anything offensive.” We wouldn't let them off the hook. And even more so than the tiger, an AI system doesn't make autonomous decisions in a vacuum. Humans decide how and for what purpose to design it, select its training data and parameters, and choose when to let it loose on an unsuspecting population.

Companies may not be able to anticipate every outcome. But the claim that the output is simply a reflection of reality is a deflection. Lensa AI's creators say that “the man-made unfiltered data sourced online introduced the model to the existing biases of humankind. Essentially, AI is holding a mirror to our society.” But is the app a reflection of society, or a reflection of historical bias and injustice that a company is choosing to entrench and amplify?

The persistent claim that AI is neutral is not only incorrect; it also obscures the fact that this choice, too, is not neutral. It's nifty to get new profile pics, and there are many more valuable and important applications for generative AI. But we don't need to shield companies from moral or legal responsibility to get there. In fact, it would be easier for society to lean into AI's potential if its creators were accountable. So let's stop pointing fingers at AI and start talking about who really generates the outcomes of our technological future.
