Two new documents from the US Senate aim to steer the future of AI safety: the "Future of AI Innovation Act" and a framework introduced by Senators Romney, Reed, Moran, and King.
The documents suggest implementing safeguards and oversight mechanisms for high-risk AI systems to prevent their exploitation by foreign adversaries and bad actors.
The documents call for testing and evaluating potential AI risks, including threats to critical infrastructure, energy security, and weapons development.
While the documents are a step forward for AI safety, they do not fully satisfy the public's demand for effective regulation, as they do not create a dedicated AI regulator.
Instead of a dedicated regulator, the documents propose expanding the responsibilities of NIST (National Institute of Standards and Technology), a poor fit given that NIST is committed to voluntary standards and has shown no interest in taking on a regulatory role.
The full memo can be accessed here.
CAIP looks forward to working with the Trump administration to promote common-sense AI policies.
This legislation buys us a great deal of security at little or no cost to innovation.
However, more work will be needed to develop appropriate laws.