CAIP Comment on Managing Misuse Risk for Dual-Use Foundation Models

September 16, 2024

Response to NIST AI 800-1 Draft: Managing Misuse Risk for Dual-Use Foundation Models

On July 26th, NIST’s AI Safety Institute released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security. These guidelines were among five products that the Department of Commerce announced at the 270-day mark of President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development of AI.

The Center for AI Policy responded to NIST’s request for comments to help shape these guidelines. Below is an executive summary of our full comment.

Executive Summary

The Center for AI Policy (CAIP) appreciates the opportunity to provide feedback on the NIST AI 800-1 Draft. CAIP commends NIST for its thorough and thoughtful best practices on how to map, measure, manage, and govern misuse risks from foundation models. AI safety is a pressing regulatory challenge, and this document represents an important step forward in encouraging companies to manage these risks. In particular, measures such as risk thresholds, whistleblowing requirements, and safe harbors demonstrate that these practices have incorporated the most recent research on AI safety.

CAIP has three categories of feedback to share with NIST. First, feedback on the overall process design, such as clarifying the target audience for documentation and specifying the timing of practices in a roadmap addendum. Second, refinements to the proposed best practices to better achieve each objective. Third, minor grammatical and language modifications for clarity. Through these recommendations, CAIP hopes to support NIST’s work and help protect against catastrophic risks.

Finally, CAIP acknowledges that the scope of this document was limited to misuse risk and that the practices outlined meet the intended aim. However, an equally important and parallel AI safety concern is misalignment risk. In the future, CAIP encourages NIST to develop similar guidelines, or to expand the misuse guidelines, to address misalignment risk.

You can read the full comment here.
