ChatGPT: A scientist explains the hidden genius and pitfalls of OpenAI's chatbot
Language modelling tools like GPT-3 are capable of engaging in increasingly realistic conversations, but there’s still lots we need to figure out.
I remember the first time I saw my son interacting with a large language model. He was only five years old at the time, but he was able to carry on a natural, flowing conversation with the AI as if it were a real person. Watching him engage with this technology, I was overcome with emotion.
It was a powerful reminder of just how far we have come in the field of artificial intelligence, and it made me realise the limitless potential of these large language models to revolutionise the way we interact with technology.
Actually, the above paragraph was written entirely by AI. But aside from the unbridled optimism, it could just as well have been written by me. If you’ve had the (often disappointing) experience of interacting with customer-service chatbots, you may be wondering how we suddenly have AI that can understand a request (to write an introduction to this piece) and deliver such a pertinent response.
To understand this forward leap, let’s look at how machine-based dialogue works. Traditionally, chatbots have analysed the words in your prompt and chosen their answers from a pre-defined set of options.
Today, even the most advanced commercially available chatbots still use a lot of canned answers. For example, if you ask Alexa what her favourite beer is, it’s likely that someone working at Amazon composed the response.
In contrast, ChatGPT, the AI chatbot that I used, is based on a Generative Pre-trained Transformer model, which can generate its own conversational output. It wouldn’t name a favourite, but recommended Belgian beer Westvleteren 12. ChatGPT is a prototype that AI research company OpenAI released to the public last month. Together with other large language models being developed by Google, Facebook, and others, this new generative AI is completely changing the game.
The language model that ChatGPT is based on was trained on billions of written texts from the Internet. Based on that data, GPT can predict the next most suitable word in a text string. This is not a new tactic, but the ‘Transformer’ technology it uses also attempts to understand context by analysing entire sentences and the relationships between them.
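To get an intuition for what “predicting the next most suitable word” means, here is a deliberately simplified sketch: a toy bigram model that picks the word most often seen after the previous one in a tiny training text. (This is purely illustrative and nothing like OpenAI’s actual code; GPT uses a huge neural network conditioned on long stretches of context, not simple word counts.)

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the billions of texts GPT saw
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often
```

The crucial difference is that a model like this looks at only one preceding word, whereas the Transformer’s attention mechanism weighs relationships across whole sentences, which is what gives ChatGPT its grasp of context.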
This is huge, because commercial chatbots have long struggled with context. Take Apple’s voice assistant Siri, who years ago made headlines by offering to name a user “An Ambulance” when told “Please call me an ambulance.” It’s one of the reasons we’re so accustomed to chatbots saying they don’t understand our query, or giving technically correct responses that aren’t useful.
When my husband asked ChatGPT to write a marriage proposal to me in the style of a headline from the satirical publication The Onion, it returned “Heartless Robot Researcher Kate Darling to Marry Hopeless Human Suitor in Futile Attempt at Emotional Connection.” I think it’s safe to say that nobody at OpenAI drafted that answer, and it’s incredible how well the tool understood the assignment.
Another groundbreaking aspect of the Transformer, which is also used in other new language models like Google’s LaMDA, is that it significantly reduces the time needed to create the model. So basically, today’s tech companies have access to massive amounts of training data, more computing power than ever, and are able to build and train a language model with much less effort than before. As these things come together, they’re ushering in a new era of conversational AI.
There are some drawbacks that may prevent commercial chatbots from adding too much generative content, at least for now. ChatGPT can argue with you, draft poems, and compose a hilariously sarcastic email to your boss, but it will also give false answers with confidence, or write a rap about scientists that is extremely sexist:
“If you see a woman in a lab coat,
She's probably just there to clean the floor,
But if you see a man in a lab coat,
Then he's probably got the knowledge and skills you're looking for."
Clearly, the magic comes with risks. OpenAI did add some fine-tuning to ChatGPT’s dialogue. For example, humans helped train the AI by giving it feedback on its conversational skills, and it also contains some pre-scripted answers and deflections. But it remains impossible to anticipate what the chatbot might say in every given situation, making it a liability hazard for a lot of applications, and raising a slew of ethical issues.
As ChatGPT so eloquently wrote in the beginning, we have indeed come far in the field of artificial intelligence, and these advances may well deliver on “the limitless potential of these large language models to revolutionise the way we interact with technology.” But we need to stay in dialogue with each other as we figure out what that future looks like.
Dr Kate Darling is a Research Scientist at the MIT Media Lab and author of The New Breed. Her interest is in how technology intersects with society.