UK to host global AI summit addressing technology’s significant risks
This autumn, the UK government will host a global artificial intelligence (AI) summit aimed at assessing the technology’s most significant risks. With growing concerns about the potential existential threat AI poses to humanity, regulators worldwide are working to establish new rules to mitigate these risks. Prime Minister Rishi Sunak expressed his desire for the UK to lead efforts in ensuring AI benefits are harnessed for the greater good of humanity.
Sunak emphasized the importance of developing and utilizing AI in a safe and secure manner. The summit will bring together key countries, leading tech companies, and researchers to discuss and agree on safety measures to evaluate and monitor the most significant risks associated with AI. The Prime Minister is currently discussing the issue with President Biden in Washington DC, where he stated that the UK is the “natural place” to take the lead in AI conversations.
Downing Street pointed to the Prime Minister’s recent meetings with the heads of AI firms, as well as to the 50,000 people employed in the sector, which is worth £3.7bn to the UK economy. However, some question the UK’s leadership credentials in this field. Yasmin Afina, a research fellow at Chatham House’s Digital Society Initiative, suggested that the UK should focus on promoting responsible behaviour in AI research, development and deployment rather than taking on an overly ambitious role.
Interest in AI has surged since the chatbot ChatGPT emerged last November, demonstrating an ability to answer complex questions in a human-like manner. That capability, and the immense computing power behind such systems, has fuelled concern. AI industry leaders, including the heads of OpenAI and Google DeepMind, have warned that AI could lead to the extinction of humanity, citing potential misuses such as the development of new chemical weapons.
These warnings have led to a push for effective AI regulation, with the European Union working on an Artificial Intelligence Act and collaborating with the US on a voluntary code for the sector. China has also been proactive in drafting AI regulations, proposing that companies must notify users when an AI algorithm is being used. The UK government presented its ideas in a White Paper in March, which received criticism for having “significant gaps.”
While the UK may struggle to be as influential as the EU and China in AI regulation, Matt O’Shaughnessy, a visiting fellow at the Carnegie Endowment for International Peace, highlighted its role as an academic and commercial hub, with institutions known for their work on responsible AI. He believes this positions the UK as a serious player in the global discussion about AI.