Comment on Frontiers in AI for Science, Security, and Technology (FASST) Initiative

Claudia Wilson, November 12, 2024

Response to Department of Energy

On September 12, the Department of Energy's (DOE) Office of Critical and Emerging Technologies released a request for information (RFI) on the Frontiers in AI for Science, Security, and Technology (FASST) Initiative. FASST seeks to build the world's most powerful, integrated scientific AI models for scientific discovery, applied energy deployment, and national security applications.

The Center for AI Policy (CAIP) responded to Question 3(a), which asked about open sourcing scientific and applied energy AI models, and Question 3(c), which asked about considerations for the DOE's ongoing AI red-teaming and safety work.

In response to these questions, we provide the following policy proposals: 

  • Safety testing: When conducting safety testing, consider both technical risks, which arise without malicious actors, and misuse risks.
  • Partner organizations: Work with government agencies (e.g., the AI Safety Institute) and industry partners (e.g., METR) to tailor safety testing to a DOE context.
  • Open sourcing: Adjust Know Your Customer (KYC) requirements and available model components based on the results of safety testing.

Read the full comment here.

Comment on Bolstering Data Center Growth, Resilience, and Security

CAIP's response to the National Telecommunications and Information Administration (NTIA)

November 4, 2024
Read more

Comment on BIS Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

CAIP supports these reporting requirements and urges Congress to explicitly authorize them

October 8, 2024
Read more

CAIP Comment on Managing Misuse Risk for Dual-Use Foundation Models

Response to the initial public draft of NIST's guidelines on misuse risk

September 16, 2024
Read more