Hickenlooper on AI Auditing Standards

Jason Green-Lowe, June 13, 2024

Participating in the Punchbowl Pop-Up Conversation on June 12, Senator John Hickenlooper (D-CO) reiterated his call for qualified third parties to effectively audit AI systems and verify their compliance with federal laws and regulations.

Jake Sherman of Punchbowl News asked the Senator to explain his February 2024 keynote speech at the Silicon Flatirons Flagship Conference, in which he proposed AI auditing standards and laid out a framework for AI regulation.

Sherman asked, specifically, "What kind of third party do you foresee being able to oversee AI?" Senator Hickenlooper responded, "Well, I mean, it's got to be somebody." He went on, "It's great to let people say, oh well, you're going to self-regulate yourself, right? It's like letting you know high school kids grade their own papers." You can see the exchange here.

The Center for AI Policy (CAIP) agrees with Senator Hickenlooper. Qualified third parties are needed to audit AI systems, and this commonsense requirement should be passed into law. Although some AI companies take the initiative to assess their own risks, independent experts must audit all generative AI systems and ensure they adhere to federal safety standards. This policy is an important part of CAIP's model legislation, which you can learn more about here.
