Democratizing AI Governance

August 24, 2024

At the Democratic National Convention this week, the Center for AI Policy sponsored a mobile billboard to highlight the need for democratizing AI governance in the U.S. The 15-second ad makes a simple point: “Sam Altman is not Uncle Sam. Don’t put him in charge of AI safety.” The OpenAI CEO has become a symbol of the unhealthy concentration of power within the industry, filling a vacuum left by the absence of congressional leadership.


Altman has made himself a central figure in the policy debates about accelerating the pace of AI development and addressing its risks. He made waves with an international tour to raise trillions of dollars, including from the United Arab Emirates, to expand the world’s ability to power AI. He warned of the threat from authoritarian governments using AI to cement their power and urged the U.S. to take a leadership role with AI to ensure a democratic future. He even called for the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.” 

His warnings about the threat from AI and the need to regulate the most powerful AI systems, however, have not been backed by consistent support for AI safety measures. Under his leadership, OpenAI has disbanded its key safety team and reportedly violated its own safety protocols. Behind closed doors, lobbyists hired by OpenAI pushed hard to weaken the EU’s AI Act. At the risk of stating the obvious, the people with the most money and power to gain from accelerating AI development cannot be relied upon to manage the massive disruption AI will introduce to the economy, civil liberties, and global safety.

Dealing with the risks of AI requires impartial oversight, informed by diverse perspectives and free from the pressures of profit- or power-driven motives. AI should be regulated by a federal agency that’s subject to congressional oversight and accountable to the American people. That’s why CAIP’s model legislation outlines the creation of a federal agency responsible for overseeing frontier AI systems, with measures to prevent conflicts of interest and provide for balanced oversight from a diverse body of experts. We’re also championing whistleblower protections as part of our 2024 priorities - a modest step toward countering consolidated authority and pushing back against misuse of power and lack of accountability.

The debate over AI governance is not just about technology; it’s about who we trust to look out for the public interest. The future of AI should be guided by input from our democratic institutions, not dictated to us by a handful of CEOs.

September 2024 Hill Briefing on AI and Education

Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies

September 10, 2024

What South Dakota Thinks About AI: Takeaways from CAIP’s Trip

Last week, Brian Waldrip and I traveled to South Dakota, seeking to understand how artificial intelligence (AI) is perceived and approached in the Great Plains.

September 5, 2024

July 2024 Webinar on AI and Autonomous Weapons

Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future

July 29, 2024