ChatGPT, OpenAI’s artificial intelligence (AI)-fueled chatbot, launched in November 2022 and was an overnight phenomenon, going on to become the fastest-growing consumer application in history.
This signaled a new era of AI opportunity and drove AI companies to astonishing capitalization levels. OpenAI tripled in value in less than 10 months, reaching an $80 billion valuation. NVIDIA, a major chipmaker, nearly tripled in value over the same span to $1.2 trillion, on its way to becoming the third most valuable company in the world by market capitalization.
This new enthusiasm encouraged organizations across the AI spectrum to invest not only in technological development but also in shaping the regulatory environments in which they would operate.
For a time, OpenAI shaped the regulatory space through visits and testimony from CEO Sam Altman and others in the organization. But just a year after the release of ChatGPT, OpenAI began registering in-house lobbyists, their lobbying scaling alongside their business.
In 2023, they began seriously investing in lobbying, spending $380,000 in total as they brought on their first in-house lobbyist and engaged traditional lobbying firms. In 2024, they stepped up their game, nearly tripling their spending to $1,130,000. They tripled the size of their in-house team and even brought on former Congressman Norm Coleman to advocate on their behalf.
And that is just what they themselves have spent, not counting lobbying done by some of their major financial backers. Microsoft, between 2023 and now, has spent nearly $20 million on lobbying¹ and made over $1.5 million in political contributions through its political action committee, MSVPAC. It’s hard to specify how much of that lobbying went directly towards AI, but lobbying records indicate it is one of the company’s main focuses.
Some lobbying by OpenAI and others in the AI ecosystem is understandable; they have legitimate business interests to protect. Industry should absolutely have a seat at the table and is a valuable partner in this enterprise. They’re familiar with cutting-edge technology, and they can help adjust potential regulations to make sure those rules do not impose unnecessary burdens or target the wrong kinds of software. A healthy dialogue among the private sector, the public sector, and other concerned stakeholders has been a significant contributor to the dynamic American economy and the incubation of new technologies. The Center for AI Policy (CAIP) hopes this dialogue will continue.
However, there’s a difference between responsible lobbying to make sure your business’s needs are heard and reckless lobbying to block any and all guardrails that might be needed to prevent a public safety catastrophe. In practice, Big Tech has been advocating to block all mandatory safeguards instead of working to make those safeguards more thoughtful or more narrowly tailored.
Even if the harms from AI are not widely known or well understood, they’re potentially quite severe, and safety efforts aren’t able to keep pace. The group of people working to make sure these harms are addressed is growing but still small, and it has nowhere near the financial capital to deploy towards lobbying for safety that major tech companies deploy towards limiting it.
We need to make sure that the voices at the table are more balanced than the current lobbying spending would suggest, ensuring that voices supporting the safety of the American public carry as much weight as those potentially endangering it.
¹ $19,491,000, from adding up all of Microsoft’s lobbying expenditures over 2023 and 2024.