CAIP Comment on Managing Misuse Risk for Dual-Use Foundation Models

September 16, 2024

Response to NIST AI 800-1 Draft: Managing Misuse Risk for Dual-Use Foundation Models

On July 26th, NIST’s AI Safety Institute released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security. These guidelines were among five products that the Department of Commerce announced at the 270-day mark of President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development of AI.

The Center for AI Policy replied to NIST’s request for comments to help shape these guidelines effectively. Below is an executive summary of our full comment.

Executive Summary

The Center for AI Policy (CAIP) appreciates the opportunity to provide feedback on the NIST AI 800-1 Draft. CAIP commends NIST for its thorough and thoughtful best practices on how to map, measure, manage, and govern misuse risks from foundation AI models. AI safety is a pressing regulatory challenge, and this document represents an important step forward in encouraging companies to manage these risks. In particular, measures such as risk thresholds, whistleblowing requirements, and safe harbors demonstrate that these practices incorporate the most recent research on AI safety.

CAIP has three categories of feedback to share with NIST. First, feedback on the overall process design, such as clarifying the target audience for documentation and specifying the timing of practices with a roadmap addendum. Second, refinements of the proposed best practices to better achieve each objective. Third, minor grammatical and language modifications for clarity. Through these recommendations, CAIP hopes to support NIST’s work and help protect against catastrophic risks.

Finally, CAIP acknowledges that the scope of this document was limited to misuse risk and that the practices outlined meet that aim. However, an equally important and parallel AI safety concern is misalignment risk. In the future, CAIP encourages NIST to develop similar guidelines, or to expand the misuse guidelines, to address misalignment risk.

You can read the full comment here.
