A Recommendation for the First Meeting of AI Safety Institutes

Mark Reddish, October 24, 2024

Next month, the United States will host the inaugural convening of the International Network of AI Safety Institutes. The Network consists of AI safety institutes from Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. It was established to promote safe, secure, and trustworthy AI systems by enabling closer collaboration on strategic research. This group has the potential to produce critical insights and shape global policy for safe AI, and its first meeting must set the tone for ensuring it delivers on this promise. 

During the convening, technical experts from each member's AI safety institute will review the latest developments in AI, seek to align on priority work areas for the Network, and begin advancing global collaboration and knowledge sharing on AI safety. The Center for AI Policy fully supports these goals. However, we recommend adding one more: reclaiming safety as the focus of international conversations on AI.

World leaders should be focused on AI safety. That was the premise when a series of international AI summits for heads of state and other stakeholders began in November 2023 with the UK's AI Safety Summit, convened to identify next steps for the safe development of frontier AI. At the Seoul meeting in May 2024, safety shared the agenda with innovation and inclusivity. And at the upcoming AI Action Summit in Paris in February 2025, AI safety is expected to be just one of five topics, further diluting its prominence.

The trend line is clear: even as AI models become more sophisticated and dangerous, safety is claiming a shrinking share of the international summits' agendas. Topics like the future of work, innovation, and culture are important and should be thoroughly addressed at the national level, but AI safety is unique in its urgent need for a coordinated global response. The rapid development of AI is creating risks that individual nations will struggle to manage alone; without global consensus on AI safety, each nation will be left exposed to the risks posed by the lowest common denominator.

The Network's members, with their technical expertise, credibility, and access to government leaders, are uniquely positioned to provide data-driven insights that convince heads of state to elevate safety in their discussions. When the safety institutes convene next month, they must commit to pressing their nations' leaders with a clear message: without a focus on safety and a meaningful framework for addressing AI risks, AI's benefits will be built on unstable ground.
