Comment on Frontiers in AI for Science, Security, and Technology (FASST) Initiative

November 12, 2024

Response to Department of Energy

On September 12, 2024, the Department of Energy's (DOE) Office of Critical and Emerging Technologies released a request for information on the Frontiers in AI for Science, Security, and Technology (FASST) initiative. The FASST initiative seeks to build the world's most powerful, integrated scientific AI models for scientific discovery, applied energy deployment, and national security applications.

The Center for AI Policy provided responses to Question 3(a), which asked about open sourcing scientific and applied energy AI models, and Question 3(c), which asked about considerations for the DOE’s ongoing AI red-teaming and safety risks.

In response to these questions, we provide the following policy proposals: 

  • Safety testing: Consider both technical risks, which occur without malicious actors, and misuse risks when conducting safety testing.
  • Partner organizations: Work with government agencies (e.g., the AI Safety Institute) and industry partners (e.g., METR) to tailor safety testing to a DOE context.
  • Open sourcing: Adjust Know Your Customer (KYC) requirements and available model components based on the results of safety testing.

Read the full comment here.

Comment on Disclosure of Information Regarding Foreign Obligations

CAIP’s response to the Department of Defense

January 16, 2025

FY 2025's NDAA Is a Valuable Yet Incomplete Accomplishment

CAIP commends Congress for its hard work and timely achievements, but much remains to be done in FY 2026

December 23, 2024

Comment on Safety Considerations for Chemical and/or Biological AI Models

CAIP's response to the AI Safety Institute

December 2, 2024