‘Would you swear at ChatGPT if it had a head?’: RI Christmas lectures preview

Prof Mike Wooldridge gives us an early peek at the demonstrations his Christmas Lectures will include - plus an insight into how he thinks AI is going to change the world for children.


Photo credit: Paul Wilkinson Photography, Alamy

Published: December 24, 2023 at 9:00 am

In The Royal Institution’s almost 200-year history of Christmas Lectures, the topics covered have swept through chemistry, mechanical engineering and astronomy to psychology, climate change and, this year, artificial intelligence (AI).

Many people are concerned about how AI is going to change our healthcare, careers and entertainment, but what do the experts think? Mike Wooldridge is a professor of computer science at the University of Oxford and has been selected to deliver this year’s Royal Institution Christmas Lectures on AI.

What was your reaction to being selected to give this year’s Royal Institution Christmas Lectures?

I was stunned. Flabbergasted! I thought: have they contacted the wrong person?

The Christmas Lectures were one of the treats over the Christmas period for me – I was really into science as a young kid and I can remember watching them in the 1970s.

I remember watching Carl Sagan, who was an astrophysicist, talking about the planets and being absolutely entranced by his lectures. More recently we had Sir David Attenborough. David Attenborough!

Following in their footsteps is really quite something. For so many British scientists, the Christmas Lectures were one of the things that kindled their interest in science. To be part of that legacy is amazing.

The Christmas Lectures are famous for their iconic props. What can we expect to see in your talk?

The first thing the RI team told me is that it’s a tradition to have an explosion and a dog (ideally at different times). So there will be an explosion and there will be a dog, but I’m not saying any more than that.

What I will say is you’re going to see lots of demos of how AI works in computer games. A lot of children spend their leisure time gaming and they may not realise that there’s a lot of AI behind the scenes in computer games.

So we’re hoping to get some kids out of the audience to play with some of the most sophisticated AI game technology in the world. Lucky kids!

We’re also going to do a live Turing Test. Until the last couple of years, we didn’t have computer programs that could realistically pass the Turing Test [and be able to converse in a manner that’s indistinguishable from a human]. Then all of a sudden, we have programs that could plausibly pass it.

As an AI researcher who’s been working in this field for so long, to suddenly have that opportunity is enormously exciting. So we’re going to see what happens – I don’t know how it’s going to come out.

The aim is to demystify all of this. So we’ll also play some games that explain how ChatGPT works. We’ll show people what’s going on ‘under the hood’ of these AI programs. I hope to really show that there’s nothing to be afraid of with AI.

How do you think today’s children are forming relationships with AI?

There’s a distinction between what are called ‘digital immigrants’ and ‘digital natives’. The World Wide Web didn't happen until I was nearly 30 years old, so I didn't grow up with the Internet. I'm a digital immigrant – I moved into this area later in life.

My kids have not just grown up with the Internet, but they've grown up accustomed to absolutely ubiquitous Internet access. So they are digital natives. This is the generation that we're going to be talking to in the Christmas Lectures.

It's the first generation that's going to grow up with tools like ChatGPT around them all the time. All I can hope to do is prepare them, so that when they encounter these tools they go in with their eyes open: excited about the possibilities, but aware of the risks.

Prof Mike Wooldridge, whose Christmas Lectures will be on the subject of AI, investigates how logic, computational complexity and game theory interact in computational systems - Photo credit: Paul Wilkinson Photography, Alamy

Some parents and teachers worry about children becoming endlessly diverted by AI entertainment, or over-relying on it for their homework. What would you say to them?

Fears about technology being a threat to civilisation are nothing new. I remember in the 1970s, when pocket calculators became widely available, people had exactly the same fears: they worried that kids wouldn’t learn how to do arithmetic and that they would do all their homework on the calculator.

But mathematics didn’t collapse. What pocket calculators did for us is they just relieved us of a very tedious burden. Pocket calculators enable many people to do the arithmetic that they don’t particularly enjoy doing, and do it more quickly and more reliably.

My guess is that, in the long run, that's exactly what will happen with AI technology. It’ll just be another tool, similar to pocket calculators.

Thinking along those lines, I hear lots of teachers are hoping that AI might soon be able to do their marking for them. Do you see that happening? How else could AI be used in the classroom?

I’d be nervous about using it for marking right now, because AI gets things wrong a lot. I think it could [perform that task] ultimately, but human judgment is very important for marking.

But I think AI will have its place in the classroom. One of the unexpected benefits of tools like ChatGPT is that they’re really good for brainstorming. They can prime a teacher on different subjects and give them ideas about how to present things in new and interesting ways.

The dream of AI in education is that we end up with AI tutors: something like ChatGPT that takes on the role of a teacher.

I think teachers are safe in their jobs for the foreseeable future though. We’ve seen endless new technology in teaching, but, fundamentally, teaching is very similar now to what it was 200 years ago.

Some studies are investigating what happens to a child’s psychology if they’re consistently rude to an AI in a way that they wouldn’t be with a human. Should we encourage children to respect AI and see them as friends?

On the one hand, I think this technology is just a tool. Swearing at an Excel spreadsheet (which I do most days) is not intrinsically wrong – and neither is swearing at ChatGPT.

Where it becomes difficult is if the AI is presented with a human persona. Abusing an Excel spreadsheet doesn’t feel wrong, but how would you feel about abusing a humanoid robot with something like a human head and eyes?

I think most of us would feel, at the very least, uncomfortable about that – because it somehow takes it a lot closer to abusing human beings. So I think one of the important principles about AI is that it should never be presented as if it’s a human being – that it should always be presented as a tool.

I think the truth is we just don’t know yet. But I definitely think this is something that we need to keep an eye on.
