Below is a letter to the editor (LTE) responding to Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation in his Reason post, "The Authoritarian Side of Effective Altruism Comes for AI." You can read Chilson's post here.
********
Dear Editor,
Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading. Contrary to his claims, the Responsible Advanced Artificial Intelligence Act (RAAIA) does not broadly regulate benign AI systems like weather forecasting models. It explicitly exempts them, focusing oversight only on the most advanced and potentially dangerous AI systems.
Chilson also falsely asserts that RAAIA would force all open-source AI projects to track their users. In reality, open-source efforts would be exempt from application fees, and the bill requires regulators to apply safety rules fairly so that open approaches are not disadvantaged. Finally, Chilson inflates the duration of the proposed emergency powers to six months; in fact, they would lapse after two months without presidential approval, a reasonable precaution for true AI emergencies.
It's crucial that Congress swiftly pass RAAIA to ensure we have a world-class, commonsense regulatory framework in place before AI systems become too powerful to control. Narrowly targeted rules and emergency backstops will help us harness AI's immense benefits while mitigating catastrophic risks. It's time to act on AI safety before it's too late.
Sincerely,
Jason Green-Lowe
Executive Director
Center for AI Policy (CAIP)
********
Learn more about CAIP's model legislation, the Responsible Advanced Artificial Intelligence Act of 2024 (RAAIA).
The model legislation contains several key policies requiring that AI be developed safely, including permitting, hardware monitoring, civil liability reform, a dedicated government office, and emergency powers.