US Senators Demand AI Safety Disclosure From OpenAI

Jason Green-Lowe
July 23, 2024

Center for AI Policy applauds Senate action, calls for comprehensive AI safety legislation

WASHINGTON, DC – Today, five US Senators demanded that OpenAI detail efforts to make its AI safe. This action comes in response to recent allegations from company staff regarding rushed safety tests for OpenAI's latest AI model. Jason Green-Lowe, Executive Director of the Center for AI Policy (CAIP), a nonpartisan advocacy group urging Congress to require safe AI, said the following:

"The Center for AI Policy commends Senators Schatz, Luján, Welch, Warner, and King for demanding that OpenAI disclose information about its AI safety and security measures in response to staff claims that the company haphazardly rushed safety tests for its newest AI model.

This development highlights the growing concern over how to achieve AI safety without comprehensive federal legislation. Green-Lowe emphasized this point: "Sadly, in the absence of commonsense AI laws from Congress, the American public has relied mainly on voluntary commitments to create safe and trustworthy AI systems, which have not always been honored."

Recent media reports and a letter to the Securities and Exchange Commission from OpenAI whistleblowers have cast doubt on the company's approach to emerging AI safety concerns. These revelations have intensified calls for greater transparency and accountability in the AI industry.

"With the absence of federal AI laws, it is appropriate for the US Senate to seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats," Green-Lowe added.

The Center for AI Policy stresses the importance of public disclosure and proactive safety measures. "Americans need to know that these leading AI companies are taking proactive steps to safeguard their systems and data and that they promptly disclose any breaches or vulnerabilities to the public," Green-Lowe said.

While acknowledging the positive step taken by the US Senate, CAIP maintains that more comprehensive action is necessary. Green-Lowe concluded, "While US Senate oversight is a positive step forward, it is not enough. The Center for AI Policy calls on Congress to take action and establish a comprehensive regulatory framework for AI in the United States. For economic vitality and national security, the United States should be the leader in safe AI and not a follower. It is time for Congress to pass legislation requiring safe AI."

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with effective safety standards.

More @ aipolicy.us

###
