OpenAI employees filed a complaint with the SEC. Regardless of the outcome, they still need stronger whistleblower protections.
Two weeks ago, OpenAI whistleblowers filed a complaint claiming that the tech behemoth had violated SEC regulations with overly restrictive NDAs and employment agreements. They alleged that these agreements prohibited employees from "communicating concerns to the SEC about securities violations, forced employees to waive their rights to whistleblower compensation, and required employees to notify the company of communication with government regulators".
These are serious allegations. Whistleblowers are one of the key accountability mechanisms for AI companies, as there is no government enforcement of their voluntary commitments. Without them, AI companies may continue to deprioritize safety measures to meet ambitious launch dates. Only this week, we've seen revelations that evaluations for GPT-4o were condensed into a single week.
While the Center for AI Policy (CAIP) hopes that the SEC investigates these claims and acts accordingly, the fact remains that existing protections are insufficient. This complaint invokes Dodd-Frank and the Sarbanes-Oxley Act (SOX), which were designed to protect employees of public companies against retaliation for sharing securities violations or fraud with the SEC. These protections cannot be extended to employees of private AI companies. Even for employees of public companies, these whistleblower protections won't cover them unless they can link concerns about AI safety to securities violations.
Outside of SEC protections, some states have a "public policy exception" which protects whistleblowers against termination for disclosures that are in the public interest. However, the definition of public interest is entirely at the discretion of the states, and there is no guarantee it would encompass AI safety concerns. How can we expect whistleblowers to put their careers and livelihoods on the line for protections that are subject to interpretation?
California's proposed AI bill is a step in the right direction. The bill includes protections for whistleblower employees who report noncompliance with SB 1047, but, of course, these protections are geographically limited.
The US needs dedicated, federal whistleblower protections for employees of AI companies who reveal safety, privacy, or other ethical violations. Given how far away federal AI safety regulation may be, it is critical that these protections are not limited to reporting illegal activity.
We already have dedicated protections for sectors such as aviation, food safety, environmental protection, and mining. Dodd-Frank and SOX were both introduced in the last twenty-five years to address specific gaps in whistleblower protections. In both cases, it took severe consequences for Congress to legislate, with the 2007-2008 Financial Crisis prompting Dodd-Frank and Enron's corporate fraud triggering SOX.
Let's not wait any longer to see what the consequences of poor AI safety could be. Congress should introduce federal whistleblower protections for AI employees today.