On July 26th, NIST’s AI Safety Institute released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outline voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security. These guidelines were among five products that the Department of Commerce announced at the 270-day mark of President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development of AI.
The Center for AI Policy replied to NIST’s request for comments to help shape these rules effectively. Below is an executive summary of our full comment.
The Center for AI Policy (CAIP) appreciates the opportunity to provide feedback on the NIST AI 800-1 Draft. CAIP commends NIST for the thorough and thoughtful best practices on how to map, measure, manage, and govern misuse risks from foundation AI models. AI safety is a pressing regulatory challenge, and this document represents an important step forward in encouraging companies to manage these risks. In particular, measures such as risk thresholds, whistleblowing requirements, and safe harbors demonstrate that these practices have incorporated the most recent research on AI safety.
CAIP has three categories of feedback to share with NIST. First, feedback on the overall process design, such as clarifying the target audience for documentation and specifying the timing of practices with a roadmap addendum. Second, refinements of the proposed best practices to better achieve each objective. Third, minor grammatical and language modifications for clarity. Through each of these recommendations, CAIP hopes to support NIST’s work and help protect against catastrophic risks.
Finally, CAIP acknowledges that the scope of this document was limited to misuse risk and that the practices outlined met the intended aim. However, an equally important and parallel AI safety concern is misalignment risk. In the future, CAIP encourages NIST to develop similar guidelines, or expand the misuse guidelines, to address misalignment risk.
You can read the full comment here.