A Recommendation for the First Meeting of AI Safety Institutes

October 24, 2024

Next month, the United States will host the inaugural convening of the International Network of AI Safety Institutes. The Network consists of AI safety institutes from Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. It was established to promote safe, secure, and trustworthy AI systems by enabling closer collaboration on strategic research. This group has the potential to produce critical insights and shape global policy for safe AI, and its first meeting must set the tone for ensuring it delivers on this promise. 

During the convening, technical experts from each member’s AI safety institute will review the latest developments in AI, seek to align on priority work areas for the Network, and begin advancing global collaboration and knowledge sharing on AI safety. The Center for AI Policy fully supports these goals. However, we recommend one more goal for the Network: reclaiming safety as the focus of international conversations on AI.

World leaders should be focused on AI safety. Indeed, that is how the series of international AI summits for heads of state and other stakeholders began: the UK’s AI Safety Summit, convened in November 2023, was dedicated to identifying next steps for the safe development of frontier AI. At the Seoul meeting in May 2024, safety shared the agenda with innovation and inclusivity. And at the upcoming AI Action Summit in Paris in February 2025, AI safety is expected to be just one of five topics, further diluting its prominence.

The trend lines are clear: even as AI models become more sophisticated and more dangerous, safety is receiving a shrinking share of the international summits’ agendas. Topics like the future of work, innovation, and culture are important and should be thoroughly addressed at the national level, but AI safety is unique in its urgent need for a coordinated global response. AI is developing too rapidly, and its risks are too significant, for individual nations to manage alone. Without global consensus on AI safety, each nation will be left to deal with the risks posed by the lowest common denominator: the least careful developers anywhere in the world.

The Network’s members, with their technical expertise, credibility, and access to government leaders, are uniquely positioned to provide the data-driven insights that can convince heads of state to elevate safety in their discussions. When the safety institutes convene next month, they must commit to delivering a clear message to their nations’ leaders: without a focus on safety and a meaningful framework for addressing AI risks, AI’s benefits will be built on unstable ground.
