Sam Altman’s Dangerous and Unquenchable Craving for Power

Jason Green-Lowe, October 9, 2024

The Center for AI Policy (CAIP) has repeatedly warned that OpenAI’s CEO, Sam Altman, is less public-spirited than he seems.

  • In May, we highlighted the courageous resignation of OpenAI’s safety researchers, who put their stock options at risk to warn the public about dangerous practices at OpenAI.
  • In July, we ran an analysis of Sam Altman’s proposed “AI governance agenda,” which focused on Altman’s opinion that the government should give him more money and turned out not to include any concrete safety proposals.
  • In August, we sent a video billboard to the Democratic National Convention to remind people that Mr. Altman failed to replace the safety engineers who quit after being denied the resources they needed to do their jobs.
  • In September, we called attention to threats made by OpenAI against users trying to explore the “chain of thought” feature in its new o1 model. These threats were alarming because OpenAI rushed o1 to market before its third-party evaluators had enough time to finish their testing.

CAIP is pleased to see Robert Wright’s opinion piece in the Washington Post, entitled “Sam Altman’s Imperial Reach.”

Wright echoes what CAIP has said numerous times, warning elected officials and regulators that Sam Altman’s “boundless ambition is putting AI, and the world, on a dangerous path.” In the piece, Wright argues that Altman has a history of doing and saying whatever is necessary to maximize his own power.

Not the power of OpenAI, but Altman’s power.

Altman’s mentor at Y Combinator, Paul Graham, told a journalist in 2016, “Sam is extremely good at becoming powerful.”

In a 2008 blog post on startup fundraising, Graham offered this vivid description of Altman’s imperial skills: “You could parachute him into an island full of cannibals and come back in five years, and he’d be the king.” The post proved to be foreshadowing: roughly six years later, in 2014, Graham stepped down as Y Combinator’s leader and left Altman as its new president, aka its king.

When OpenAI was founded in 2015, Altman repeatedly expressed an interest in reducing artificial intelligence's existential risks—one of the passions shared by his co-founder (and major investor), Elon Musk. 

A few years later, Musk left OpenAI, and Altman’s interest in existential risk withered away. Once Altman had Musk’s money, existential risk was no longer a top priority, and Altman could stop pretending to care about safety.

Last year, Altman shocked many on Capitol Hill and within Big Tech by openly calling for regulation of AI companies. He made a strong first impression in congressional offices, where many elected officials and staffers focused on AI safety believed they had found a significant ally who shared their concern for public safety around this emerging technology.

However, after OpenAI had a chance to triple its lobbying spending and gather more power in DC, the company publicly changed its tune on Capitol Hill: instead of calling for safety regulations, Altman now calls for public subsidies for the semiconductor factories and data centers he needs to expand his business.

The Center for AI Policy believes that no single man, woman, or machine should have this much power over the future of AI, especially when all signs suggest they are unwilling to put the public benefit ahead of their private quest for power.

The only natural antidote to an increasingly powerful and relentlessly ambitious Sam Altman is to wake up Uncle Sam. 

The federal government needs to pass AI safety legislation immediately so that the American public's needs will be reflected in AI developers’ final decisions—not just their public relations campaigns, which hide their ulterior motives and ultimate goals.
