CAIP work
Advising Policymakers
We’re working with Congress and federal agencies to help them understand advanced AI development and effectively prepare for it. We create resources, host events, and connect policymakers with the stakeholders they need to hear from.
We don't just talk about risks. We develop and advocate for solutions.
We share policy proposals, draft model legislation, and give feedback on others' policies. This work is collaborative and iterative. We take in ideas from our network of leading researchers and practitioners to make recommendations that are both robust and practical.
FY 2025's NDAA Is a Valuable Yet Incomplete Accomplishment
CAIP commends Congress for its hard work and timely achievements, but much remains to be done in FY 2026
CAIP Statement on Michael Kratsios and Sriram Krishnan Being Named to Key White House Technology Roles
CAIP looks forward to working with the Trump administration to promote common-sense AI policies
CAIP Applauds the Romney-Led, Bipartisan Bill to Address Catastrophic AI Risks
This legislation buys us a great deal of security at little or no cost to innovation
CAIP priorities
Our policy mission is simple: require safe AI.
To ensure powerful AI is safe, we need effective governance. That’s why our policy recommendations focus on ensuring the government has enough:
- Visibility and expertise to understand AI development
- Adeptness and authority to respond to rapidly evolving risks
- Infrastructure to support developers in innovating safely
Our Priorities
- Build government capacity
- Safeguard development
- Mitigate extreme risk
As AI grows more capable, so do its risks. We must prepare governance now to keep pace. We are advocating policies to ensure the government has enough:
- Visibility and expertise to understand AI development
- Adeptness and authority to respond to rapidly evolving risks
- Infrastructure to work with rather than against developers
Frequently asked questions
With AI advancing rapidly, we urgently need to build the government's capacity to identify and respond to AI's national security risks.