The Center for AI Policy (CAIP) has repeatedly warned that OpenAI’s CEO, Sam Altman, is less public-spirited than he seems.
CAIP is pleased to see Robert Wright’s opinion piece in the Washington Post, entitled “Sam Altman’s Imperial Reach.”
Wright echoes what CAIP has said numerous times, warning elected officials and regulators that Sam Altman’s “boundless ambition is putting AI, and the world, on a dangerous path.” In his opinion piece, Wright argues that Altman has a history of doing and saying whatever is necessary to maximize his own power.
Not the power of OpenAI, but Altman’s power.
Altman’s mentor at Y Combinator, Paul Graham, told a journalist in 2016, “Sam is extremely good at getting powerful.”
In a 2008 blog post on startup fundraising, Graham offered this vivid description of Altman’s imperial skills: “You could parachute him into an island full of cannibals and come back in five years, and he’d be the king.” The post proved prophetic. About six years after Altman joined Y Combinator, Graham stepped down as its leader and made Altman the accelerator’s new president, aka its king.
When OpenAI was founded in 2015, Altman repeatedly expressed an interest in reducing artificial intelligence's existential risks—one of the passions shared by his co-founder (and major investor), Elon Musk.
A few years later, Musk left OpenAI, and Altman’s interest in existential risk withered away. Once Altman had Musk’s money, he no longer needed to treat existential risk as a top priority, and he could stop pretending to care about safety.
Last year, Altman shocked many on Capitol Hill and within Big Tech by openly calling for regulation of AI companies. He made a great first impression in congressional offices, where many elected officials and staffers focused on AI safety thought they had found a significant ally who shared their interest in public safety surrounding this emerging technology.
However, after OpenAI tripled its lobbying spending and gathered more power in Washington, the company publicly changed its tune on Capitol Hill: instead of calling for safety regulations, Altman now calls for public subsidies for the semiconductor factories and data centers he needs to expand his business.
The Center for AI Policy believes that no one man, woman, or machine should have this much power over the future of AI—especially when all signs suggest they are unwilling to put the public benefit ahead of their private quest for power.
The only natural antidote to an increasingly powerful and relentlessly ambitious Sam Altman is to wake up Uncle Sam.
The federal government needs to pass AI safety legislation immediately so that the American public's needs will be reflected in AI developers’ final decisions—not just their public relations campaigns, which hide their ulterior motives and ultimate goals.