The threat of AI is real. But there is a way to avoid it – this tech expert explains how

AI can do impressive things, but even the experts say it poses a very real threat to us. So should we put a hold on its development or is it already too late?


Published: August 20, 2023 at 6:00 am

Many of the world’s leading voices in artificial intelligence (AI) have begun to express fears about the technology. They include two of the so-called ‘Godfathers of AI’ – Dr Geoffrey Hinton and Prof Yoshua Bengio – both of whom played a significant role in its development.

Hinton shocked the AI world in May 2023 by quitting his role as one of Google’s vice-presidents and engineering fellows, citing his concerns about the risks the tech could pose to humanity through the spread of misinformation. He even said he harbours some degree of regret regarding his contributions to the field.

Similarly, Turing Award-winning computer scientist Bengio recently told the BBC that he has been surprised by the speed at which AI has evolved and felt ‘lost’ when looking back at his life’s work.

Both have called for international regulations to enable us to keep tabs on the development of AI. Unfortunately, due to the fast pace at which the tech develops and the opaque, ‘black box’ way in which much of it operates, this is much more difficult than it sounds.

Although the potential risks of generative AI, whether it’s bad actors using it for cybercrime or the mass production of misinformation, have become increasingly obvious, what we should do about them has not. One idea seems to be gathering momentum, though: global AI governance.

In an essay published in The Economist on 18 April, Anka Reuel, a computer scientist at Stanford University, and I proposed the creation of an International Agency for AI. Since then, others have also expressed interest in the idea. When I raised it again during my testimony to the US Senate in May, both Sam Altman, CEO of OpenAI, and several senators seemed open to it.

Later, leaders of three top AI companies sat down with UK prime minister Rishi Sunak to have a similar conversation. Reports from the meeting suggested that they too seemed aligned on the need for international governance. A forthcoming white paper from the United Nations also points in the same direction. Many other people I’ve spoken to also see the urgency of the situation. My hope is that we’ll be able to convert this enthusiasm into action.

At the same time, I want to call attention to a fundamental tension. We all agree on the need for transparency, fairness and accountability regarding AI, as emphasised by the White House, the Organisation for Economic Co-operation and Development (OECD), the Center for AI and Digital Policy (CAIDP) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). In May, Microsoft even went so far as to directly affirm its commitment to transparency.

But the reality that few people seem to be willing to face is that large language models – the technology underlying the likes of ChatGPT and GPT-4 – are not transparent and are unlikely to be fair.

There is also little accountability. When large language models make errors, it’s unclear why. And because the models are black boxes, it’s also unclear whether their makers can be held legally responsible for any errors their AIs make.

When push comes to shove, will the companies behind these AIs stand by their commitments to transparency? I found it disconcerting that Altman briefly threatened to take OpenAI out of Europe if he didn’t agree with the EU’s AI regulation (although he walked his remarks back a day or two later).

More to the point, Microsoft owns a significant stake in OpenAI, the maker of GPT-4, and uses the tool in its own products (such as Bing), yet neither Microsoft nor OpenAI has been fully forthcoming about how Bing or GPT-4 work, or about what data the tools are trained on.

All of which makes mitigating risks extremely difficult. Transparency is, for now, a promise rather than a reality.

Further complexity is added by the fact that there are many risks, not just one. So there won’t be a single, universal solution. Misinformation is different from bias, which is different from cybercrime, which is different from the potential long-term risk presented by truly autonomous AI.

Nevertheless, there are steps we can take (see ‘What can we do?’, below) and we should unite globally to insist that the AI companies keep the promises they’ve made to be transparent and accountable, and to support the science we need to mitigate the risks that AI poses.

What can we do?

There are steps we can take now to make developing AI safer…

  • Governments should institute an approval process, in the style of the Medicines and Healthcare products Regulatory Agency (MHRA) or the Food and Drug Administration (FDA), for the large-scale deployment of AI models, in which companies must satisfy regulators (ideally independent scientists) that their products are safe and that the benefits outweigh the risks.
  • Governments should compel AI companies to be transparent about their data and to cooperate with independent investigators.
  • AI companies should provide resources (for example processing time) to allow external audits.
  • We should find ways to incentivise companies to treat AI as a genuine public good, through both carrots and sticks.
  • We should create a global agency for AI, in which multiple stakeholders work together to ensure that the rules governing AI serve the public and not just the AI companies.
  • We should work towards something like a CERN (Conseil Européen pour la Recherche Nucléaire) for AI that’s focused on safety and emphasises: (a) developing new technologies that are better than current ones at honouring human values, and (b) developing tools and metrics to audit AI, track its risks and help to mitigate those risks directly.