Letter to the Editor of Reason Magazine

July 8, 2024

Below is a letter to the editor (LTE) responding to Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation in his Reason post titled "The Authoritarian Side of Effective Altruism Comes for AI."

********

Dear Editor,

Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading. Contrary to his claims, the Responsible Advanced Artificial Intelligence Act (RAAIA) does not broadly regulate benign AI systems like weather forecasting models. It explicitly exempts them, focusing oversight only on the most advanced and potentially dangerous AI systems.

Chilson also falsely asserts that RAAIA would force all open-source AI projects to track their users. In reality, open-source efforts would be exempt from application fees, and the bill requires regulators to apply safety rules fairly so that open approaches are not disadvantaged. Finally, while Chilson inflates the duration of the proposed emergency powers to six months, they would actually lapse after two months without presidential approval, a reasonable precaution for true AI emergencies.

It's crucial that Congress swiftly pass RAAIA to ensure we have a world-class, commonsense regulatory framework in place before AI systems become too powerful to control. Narrowly targeted rules and emergency backstops will help us harness AI's immense benefits while mitigating catastrophic risks. It's time to act on AI safety before it's too late.

Sincerely,

Jason Green-Lowe

Executive Director

Center for AI Policy (CAIP)

********

Learn more about CAIP's model legislation, the Responsible Advanced Artificial Intelligence Act of 2024 (RAAIA).

The model legislation contains several key policies requiring that AI be developed safely, including permitting, hardware monitoring, civil liability reform, a dedicated government office, and emergency powers.
