Rishi Sunak has hailed this week's artificial intelligence summit as a diplomatic breakthrough after it produced an international declaration to address the risks posed by the technology.

Here are five things we have learned from the summit.

The UK pulled off a diplomatic coup

The prime minister spent diplomatic capital convening global leaders, tech executives, academics and civil society figures at Bletchley Park in Milton Keynes, the base for second world war codebreakers.

Those attending included the US vice-president, Kamala Harris, the European Commission president, Ursula von der Leyen, award-winning computer scientists, executives at all the leading AI companies – and Elon Musk.

Even if Emmanuel Macron and Joe Biden were among the no-shows, the gathering had political and commercial heft.

Tino Cuéllar, the president of the Carnegie Endowment for International Peace, said it was a “remarkable achievement” in diplomatic terms. This was symbolised by the signing of an international declaration recognising the need to address the risks posed by AI development, backed by more than 25 countries and the EU. France will host the next full-blown AI safety summit in 2024 – guaranteeing that Sunak’s initiative will live on.

The US holds the hard power in AI

The White House made clear this week that it has the power to set the AI agenda. Biden issued an executive order requiring tech firms to submit test results for powerful AI systems to the government before they are released to the public. Harris gave a speech about AI in London on day one of the summit and announced the establishment of an AI safety institute, echoing a similar announcement by Biden last week.

The UK tech secretary, Michelle Donelan, said she was unfazed by the US initiatives, pointing to the fact that the majority of cutting-edge AI companies, such as the ChatGPT developer OpenAI, are based in the US. That remark underlined the fact that the US has commercial, as well as political, strength in AI.

Elon Musk has the star power

Elon Musk brought the glitz to the AI safety summit. Photograph: Reuters

The world’s richest man attended the summit and brought glitz, as well as distraction, to the gathering. Sunak invited him to Downing Street for a streamed fireside chat and the Tesla CEO’s warning that AI was “one of the biggest threats to humanity” overshadowed more nuanced contributions.

Some guests viewed the attendance of Musk, whose nascent AI venture xAI is nowhere near the scale of the bigger players, as an example of Fomo (fear of missing out). But his presence brought extra attention to the summit.

Existential risk is divisive, but short-term risk is not

The possibility that AI could wipe out humanity – a fear shared by less hyperbolic figures than Musk – remains a divisive one in the tech community. That difference of opinion was not resolved by two days of debate in Buckinghamshire.

But if there is a consensus on risk among politicians, executives and thinkers, then it focuses on the immediate fear of a disinformation glut. There are concerns that elections in the US, India and the UK next year could be affected by malicious use of generative AI.

Nick Clegg, president of global affairs at Mark Zuckerberg’s Meta, said this week that existential fears were being overplayed but he was concerned about the immediate threat to democratic polls. “We have some things which we need to deal with now,” he said.

Countries are moving at their own speeds

Every delegation at Bletchley was keen to claim preeminence in AI regulation, from European diplomats noting they had started the regulatory process four years ago to Americans talking up the power of their new AI safety institute.

While the EU is moving closer to passing its AI Act, UK officials have made clear they do not think regulation is needed, or even possible, at this stage, given how fast the industry is moving.

But most agree on the importance of international summits such as this one, not least to help define the problem different countries are trying to tackle.

One official said: “What we need most of all from the international stage is a panel like the Intergovernmental Panel on Climate Change, which at least establishes a scientific consensus about what AI models are able to do.”
