On September 11, the Bureau of Industry and Security (BIS) released a proposed rule, “Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters.” In line with Executive Order 14110, BIS has proposed quarterly reporting on the development and safety activities of the most powerful models.
The Center for AI Policy (CAIP) supports these reporting requirements and urges Congress to explicitly authorize them. They will give BIS valuable visibility into the state and safety of America’s AI industry. Such insight will enable BIS to assess whether innovation is keeping pace with America’s military needs and whether models are being safety tested before release to the wider public.
Beyond the design of the rule itself, sufficient resources and communication between government departments will be crucial to achieving the intent of these reporting requirements. For example, BIS may wish to establish ongoing meetings with representatives of the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) to understand which innovations are relevant to military use.
Although the proposed rule is a step toward AI safety, reporting requirements are no guarantee that companies will act responsibly. Given corporate incentives, companies may rush to develop and release AI models without sufficient safety testing. Powerful but insufficiently tested models may prove deadly when deployed in high-stakes critical infrastructure contexts. Similarly, we don’t want malicious actors armed with the capability to develop new pathogens. And current generative AI models’ tendencies toward deception and power-seeking become all the more concerning as the autonomy of AI agents increases. Only by shifting corporate incentives, whether through required safety measures or clarification of liability, can we ensure that companies don’t put society at risk with technically faulty or easily misused models.
CAIP replied to BIS’s request for comments to help refine the proposed reporting requirements. Below is an executive summary of our full comment.
Thank you for the opportunity to provide feedback on the proposed rule Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters. The Center for AI Policy (CAIP) commends the Bureau of Industry and Security (BIS) on a well-designed process for reporting information. CAIP also strongly agrees with the intended aim of the proposed rule: “to ensure and verify the continuous availability of safe, reliable, and effective AI … including for the national defense and the protection of critical infrastructure.” Leading AI developers plan to build stronger foundation models with capabilities that could pose catastrophic national security risks, while complex safety and security challenges remain unsolved. This unprecedented situation warrants careful, vigilant oversight.
In this response, we share the following feedback on three topics highlighted by BIS:
We also share additional commentary on the following topics:
Read the full comment here.