FOR IMMEDIATE RELEASE
WASHINGTON, DC, April 3 – In a pivotal step toward ensuring the safe development and use of artificial intelligence (AI), the United States and the United Kingdom have signed a groundbreaking Memorandum of Understanding (MOU). This collaborative effort establishes a foundation for developing robust AI safety measures.
"The rapid development of AI technologies presents a future filled with immense possibilities but also significant risks," stated Jason Green-Lowe, Executive Director of the Center for AI Policy (CAIP). "This MOU represents a crucial acknowledgment by two of the world's leading AI nations to systematically address these concerns by pooling resources and expertise."
This historic agreement, signed by UK Science Minister Michelle Donelan and US Commerce Secretary Gina Raimondo, outlines plans for sharing technical knowledge, exchanging vital safety information, and building up government talent to navigate AI's complexity. As AI continues to advance, the nature and magnitude of its risks remain uncertain, necessitating a cooperative and proactive approach to safety testing and risk assessment.
"Governments have a responsibility to ensure that as AI systems become more sophisticated, their deployment within society is preceded by rigorous safety testing," Green-Lowe emphasized. "The commitment from the US and UK sets a precedent for international cooperation on a scale that reflects the global reach of AI technology."
With specialized AI safety institutes already established in the UK and Japan, and one forthcoming in the US, this MOU further solidifies a framework for sustained collaboration. These national safety institutes deepen understanding of potential AI threats, identify safety standards, and facilitate international policy coordination.
"In an age where technology governance grows increasingly complex, agreements such as this one are essential to a safer digital future," Green-Lowe added. "CAIP supports this US-UK collaboration and welcomes the MOU as a positive move towards increased AI safety."
The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.
###