Model Legislation: Responsible AI Act (RAIA)

April 30, 2025

Welcome to the Center for AI Policy! We’re working with Congress and federal agencies to help them understand advanced AI development and effectively prepare for the catastrophic risks that AI could pose. Below is our model legislation, the Responsible Artificial Intelligence Act of 2025 (RAIA). The model legislation contains several key policies to require that AI be developed safely, including permitting based on independent private audits, hardware monitoring, civil liability reform, a dedicated government office, and emergency powers.

We hope that Congressional offices and others interested in AI policy will be able to use part or all of this model legislation to inform their approach to AI safety. We are available to meet with stakeholders to explain the reasoning behind these policies and to help adapt portions of the model legislation to their particular needs. To learn more, please contact our Executive Director, Jason Green-Lowe, at jason@aipolicy.us. Together, we can create a world where AI is safe enough that we can enjoy its benefits without undermining humanity's future.

Read a two-page executive summary here, a section-by-section explainer here, the full model legislation here, a cost estimate here, and a policy brief explaining our reasoning here.
