Should we be worried about AI?

The threat of malevolent machines with monstrous artificial intelligence dominating humanity is imaginary, says Luciano Floridi.

Published: February 17, 2017 at 12:00 am

Suppose you enter a dark room in an unknown building. You may panic about some potential monsters lurking in the dark. Or just turn on the light, to avoid painfully bumping into the furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, there are people who believe that, as we step into the room, we may run into some evil, ultra-intelligent machines. Fear of some kind of ogre, such as a Golem or a Frankenstein’s monster, is as old as human memory. The computerised version of such fear dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

"Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously."

Once ultra-intelligent machines become a reality, they may not be docile at all: they may enslave us as a subspecies, ignore our rights and pursue their own ends, regardless of the effects on our lives. If this sounds too incredible to be taken seriously, fast-forward half a century: the amazing developments in our digital technologies have led many people to believe that Good’s “intelligence explosion”, sometimes known as the Singularity, may be a serious risk, and that the end of our species may be near if we are not careful.

Stephen Hawking, for example, has stated: “I think the development of full artificial intelligence could spell the end of the human race.” Yet this is as correct as the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble. The problem is with the premise. Bill Gates, the co-founder of Microsoft, is equally concerned:

"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned."

And this is what Elon Musk, CEO of Tesla, a US carmaker, said:

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out."

Just in case you thought predictions by experts were a reliable guide, think again. There are many staggeringly wrong technological forecasts by great experts. For example, in 2004 Gates predicted: “Two years from now, spam will be solved.” And Musk speculates that “the chance that we are not living in a computer simulation is one in billions”. That is, you are not real; you are reading this within the Matrix. Literally.

The reality is more trivial. Current and foreseeable smart technologies have the intelligence of an abacus: that is, zero. The trouble is always human stupidity or malice. On 23 March 2016 Microsoft introduced Tay, an AI-based chat robot, to Twitter. The company had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil, Hitler-loving, Holocaust-denying, incestual-sex-promoting, “Bush did 9/11”-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the tricky and nasty messages sent to it. Microsoft had to apologise.
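To see why such a system absorbs abuse, consider a toy sketch. Tay’s actual architecture has not been published; the hypothetical ParrotBot below only illustrates the kitchen-paper failure mode: a bot that learns its replies purely from what users say, with no understanding and no filter, will echo toxicity as readily as pleasantries.

```python
import random

# A toy "parrot bot" (an invented illustration, not Tay's design):
# it learns replies purely by absorbing whatever users say,
# with no understanding and no filter.
class ParrotBot:
    def __init__(self):
        self.memory = ["Hello!"]  # a single seeded reply

    def chat(self, user_message):
        self.memory.append(user_message)   # absorb input verbatim
        return random.choice(self.memory)  # replay something absorbed earlier

bot = ParrotBot()
bot.chat("You seem nice!")
bot.chat("Some hateful slogan")            # nothing stops this going in...
print([bot.chat("Hi") for _ in range(3)])  # ...or coming back out
```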

This is the state of AI today, and for any realistically foreseeable future. Computers still fail to find printers that are right there, next to them. Yet the fact that full AI is science fiction is not a reason to be complacent. On the contrary, after so much distracting and irresponsible speculation about the fanciful risks of ultra-intelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual and serious challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

Pushing the envelope

One fundamental point needs to be understood to clarify such challenges. The success of AI is largely due to the fact that we are building an AI-friendly environment, in which smart technologies find themselves at home while we move through it more like scuba divers. It is the world that is adapting to AI, not vice versa. Let’s see what this means.

In industrial robotics, the three-dimensional space that defines the boundaries within which a robot can work successfully is defined as the robot’s envelope. We do not build droids like Star Wars’ C-3PO to wash dishes in the sink exactly as we would. Instead, we envelop environments around simple robots to fit and exploit their limited capacities and still deliver the desired output. A dishwasher accomplishes its task because its environment is structured (“enveloped”) around its simple capacities. The same applies to Amazon’s robotic shelves, for example. It is the environment that is designed to be robot-friendly. Driverless cars will become a commodity the day we can envelop the environment around them.

Enveloping used to be either a stand-alone phenomenon (you buy the robot with the required envelope, like a dishwasher or a washing machine) or implemented within the walls of industrial buildings, carefully tailored around their artificial inhabitants. Nowadays, enveloping the environment into an AI-friendly infosphere has started to pervade all aspects of reality and is visible daily everywhere, in the house, in the office and in the street. Indeed, we have been enveloping the world around digital technologies for decades without fully realising it.

In the 1940s and 1950s, the computer was a room and Alice used to walk inside it to work with it. Programming meant using a screwdriver. Human–computer interaction was like a somatic or physical relationship. In the 1970s, Alice’s daughter walked out of the computer, to step in front of it. Human–computer interaction became a semantic relationship, later facilitated by DOS (disk operating system) and lines of text, GUI (graphical user interface) and icons. Today, Alice’s granddaughter has walked inside the computer again, in the form of a whole infosphere that surrounds her, often imperceptibly. Human–computer interaction has become somatic again, with touchscreens, voice commands, listening devices, gesture-sensitive applications, proxy data for location, and so forth.

In such an AI-friendly infosphere, we are regularly asked to prove that we are humans by solving a so-called CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). The test is typically a slightly altered string of letters, possibly mixed with other bits of graphics, that we have to decipher to prove that we are human and not an artificial agent, for instance when registering for a new account on Wikipedia. Sometimes it is simply a box stating: “I’m not a robot.” Software programs cannot click on it because they do not understand the message; humans find the task trivial.

Every day there are more humans online, more documents, more tools, more devices that communicate with each other, more sensors, more RFID tags, more satellites, more actuators, more data: in a word, more enveloping. And more jobs and activities are becoming digital in nature: playing, educating, entertaining, dating, meeting, fighting, caring, gossiping, advertising. We do all this and more in an enveloped infosphere where we are more analogue guests than digital hosts. This is good news for the future of AI and smart technologies in general. They will be exponentially more useful and successful with every step we take in the expansion of the infosphere. After all, they are the real digital natives. However, enveloping the world is a process that raises significant problems. Some, like the digital divide, are well known and obvious; others are more subtle.

A marriage made in the infosphere

Imagine two people, A and H, who are married and really wish to make their relationship work. A, who does more and more around the house, is inflexible, stubborn, intolerant of mistakes and unlikely to change. H is just the opposite, but is also becoming progressively lazier and more dependent on A. The result is an unbalanced situation, in which A ends up shaping the relationship and distorting H’s behaviour, in practice if not on purpose. If the marriage works, it is because it is carefully tailored around A.

In this analogy, AI and smart technologies play the role of A, whereas their human users are clearly H. The risk we are running is that, by enveloping the world, our technologies might shape our physical and conceptual environments and constrain us to adjust to them because that is the best or easiest – or indeed sometimes the only – way to make things work. After all, AI is the stupid but laborious spouse and humanity the intelligent but lazy one, so who is going to adapt to whom, given that divorce is not an option? You will probably recall many episodes in real life when something could not be done at all, or had to be done in a cumbersome or silly way, because that was the only way to make the computerised system do what it had to do. “Computer Says No”, as the character Carol Beer in the UK comedy sketch show Little Britain would reply to any customer’s request.

What really matters is that the increasing presence of ever-smarter technologies in our lives is having huge effects on how we think of ourselves and the world, as well as on our interactions among ourselves and with the world. The point is not that our machines are conscious, or intelligent, or able to understand or know something as we do. They are not.

There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that no algorithm can be constructed that always leads to a correct yes/no answer. We know that our computational machines satisfy the Curry–Howard correspondence, for example, which indicates that proof systems in logic on the one hand, and the models of computation on the other, are structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including beating us at board games like chequers, chess and Go, and at the quiz show Jeopardy! The sky is the limit. And yet, they are all versions of a Turing machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic. Quantum computers, too, are constrained by the same limits on what can be computed (so-called computable functions). No conscious, intelligent, intentional entity is going to emerge magically from a Turing machine.
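The most famous such result is Turing’s halting problem. A minimal sketch of the argument (the function names are mine, and halts() is a hypothetical oracle which, as Turing proved, cannot actually be written):

```python
# Sketch of Turing's halting-problem argument. The function halts()
# is hypothetical: it is supposed to decide whether a given program,
# run on a given input, eventually stops.
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) halts."""
    ...  # Turing proved this body cannot be written

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # the program applied to its own source.
    if halts(program, program):
        while True:
            pass  # loop forever
    else:
        return  # halt immediately

# Now ask: does troublemaker(troublemaker) halt? If it halts, the
# oracle said it loops; if it loops, the oracle said it halts.
# Either way halts() is wrong, so no such algorithm can exist:
# a hard logical limit on every computer, quantum ones included.
```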

The point is that our smart technologies, thanks also to the enormous amount of available data, some highly sophisticated programming and the fact that they can smoothly interact with each other (think of your digital diary synchronised across various platforms and devices), are increasingly able to deal with more and more tasks better than we do, including predicting our behaviour. So we are not the only agents able to perform tasks successfully, far from it. This is what I have defined as “the fourth revolution” in our self-understanding. We are not at the centre of the universe (Copernicus), of the biological kingdom (Darwin), or of the realm of rationality (Freud). After Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies.

These are not the children of some sci-fi ultra-intelligence, but ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us re-evaluate our human exceptionality and our special role in the universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a chess master. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted, sometimes even anticipated, by devices as thick as a plank.

What does all this mean for our self-understanding? The success of our technologies largely depends on the fact that, while we were speculating about the possibility of ultra-intelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies could replace us without having any understanding, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence (as in seeing a shoe as a hammer for a nail). Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.
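How can a device with the intelligence of an abacus find the fastest route from home to the office? By sheer systematic search over stored map data. The sketch below (the toy road network and its place names are invented for illustration) shows the kind of exhaustive bookkeeping involved, with no understanding of what a road or a commute is:

```python
import heapq

# Minimal route finder over a toy road network (an invented example):
# edge weights are travel minutes. This is plain Dijkstra-style search,
# pure bookkeeping over stored data, with no understanding involved.
graph = {
    "home":    {"high_st": 4, "ring_rd": 2},
    "ring_rd": {"high_st": 1, "office": 7},
    "high_st": {"office": 3},
    "office":  {},
}

def fastest_route(start, goal):
    queue = [(0, start, [start])]  # (minutes so far, node, path taken)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node].items():
            heapq.heappush(queue, (minutes + weight, neighbour, path + [neighbour]))
    return None

print(fastest_route("home", "office"))
# (6, ['home', 'ring_rd', 'high_st', 'office'])
```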

So smart technologies are better at accomplishing tasks, but this should not be confused with being better at thinking. Digital technologies do not think, let alone think better than us, but they can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations: so-called machine learning. AlphaGo, a computer program developed by Google DeepMind, won the board game Go against Lee Sedol, one of the world’s best players, because it could use a database of around 30 million moves and play thousands of games against itself, “learning” each time a bit more about how to improve its performance. It is like a two-knife system that can sharpen itself. Yet think for a moment what would have happened if the fire alarm had gone off during the match: Sedol would have immediately stopped and walked away, while AlphaGo would have kept calculating the next move in the game.
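The shape of that self-sharpening loop can be shown in a few lines. The sketch below is not AlphaGo’s actual method (which combines deep networks with Monte Carlo tree search); it is a deliberately tiny stand-in: a value-learning agent playing a trivial take-1-or-2 counting game against itself, with each finished game nudging the value estimates that guide the next game’s moves.

```python
import random
from collections import defaultdict

# Toy self-play learner for a trivial game: players alternately take
# 1 or 2 counters from a pile; whoever takes the last counter wins.
# The agent improves only by analysing its own games, the essence of
# the self-sharpening loop described above (nothing like AlphaGo's
# real machinery).
values = defaultdict(float)  # pile size -> estimated value for the player to move

def best_move(pile, explore=0.1):
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)  # occasionally try something new
    # Leave the opponent in the worst-valued position we know of.
    return min(moves, key=lambda m: values[pile - m])

def self_play_game(start=7):
    pile, visited = start, []
    while pile > 0:
        visited.append(pile)
        pile -= best_move(pile)
    # The player who emptied the pile won. Walk the game backwards,
    # nudging the winner's positions up and the loser's down.
    for i, state in enumerate(reversed(visited)):
        target = 1.0 if i % 2 == 0 else -1.0
        values[state] += 0.1 * (target - values[state])

for _ in range(5000):
    self_play_game()

# Piles of 3 and 6 should score badly: whoever must move there loses
# against good play. Everything the program "knows" is in this table.
print({pile: round(v, 2) for pile, v in sorted(values.items())})
```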

So what’s the difference? The same as between you and the dishwasher when washing the dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded. The serious risk is not the appearance of some ultra-intelligence, but that we may misuse our digital technologies, to the detriment of a large percentage of humanity and the whole planet.

Beware of humans

We are and shall remain for any foreseeable future the problem, not our technology. This is why we should turn on the light in the dark room and watch carefully where we are going. There are no monsters, but plenty of obstacles to avoid, remove or negotiate. We should be worried about real human stupidity, not imaginary artificial intelligence. The problem is not HAL but H.A.L. – humanity at large.

We should concentrate instead on the real challenges. By way of conclusion, I will list five of them, all equally important.

  • We should make AI environment-friendly. We need the smartest technologies we can build to tackle the very concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war to famine, poverty, ignorance, inequality and appalling living standards. For example, more than 780 million people do not have access to clean water and almost 2.5 billion do not have access to adequate sanitation. Some 6–8 million people die annually from the consequences of disasters and water-related diseases. This, not AI, is among “our biggest existential threats”.
  • We should make AI human-friendly. AI should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.
  • We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created. The benefits of this transformation should be shared by all, and the costs borne by society, because never before have so many people undergone such a radical and fast transformation. The agricultural revolution took millennia to exert its full impact on society, the Industrial Revolution took centuries, but the digital one took only a few decades. No wonder we feel confused and wrong-footed.
  • We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviours, nudging people, or fighting crime and terrorism should never undermine human dignity.
  • Finally, we should make AI make us more human. The serious risk is that we may misuse our smart technologies. Winston Churchill once said that “we shape our buildings and afterwards our buildings shape us”. This applies to the infosphere and the smart technologies inhabiting it as well. We’d better get them right, now.
This excerpt is from Megatech: Technology in 2050, a collection of essays exploring the ideas, inventions and trends that will shape our future, with contributors including Prof Frank Wilczek, Melinda Gates and Alastair Reynolds. The book is out now (Profile Books, £15)