Two new documents from the US Senate aim to steer the future of AI safety: the "Future of AI Innovation Act" and a framework introduced by Senators Romney, Reed, Moran, and King.
The documents suggest implementing safeguards and oversight mechanisms for high-risk AI systems to prevent their exploitation by foreign adversaries and bad actors.
The documents call for testing and evaluating potential AI risks, including threats to critical infrastructure, energy security, and weapons development.
While the documents are a step forward for AI safety, they fall short of the public's demand for effective regulation: neither creates a dedicated AI regulator.
Instead of a dedicated regulator, the documents propose expanding the responsibilities of NIST (the National Institute of Standards and Technology), which is counterproductive: NIST develops voluntary standards and has shown no interest in taking on a regulatory role.
The full memo can be accessed here.
Leading research institutions showcased real-time AI threats.
The plans reported at these demonstrations pose an alarming threat to our nation's ability to develop effective and responsible AI.
"The discussions at this tabletop exercise should be a wake-up call for government officials: the threat from AI is outpacing our preparedness."