CAIP work
The Center for AI Policy is developing policy solutions and advocating to policymakers to ensure AI is safe. To learn more about our work, reach out to info@aipolicy.us.
Opinion
Oct 31, 2024
Countless American sports fans are suffering emotional despair and financial ruin as a result of AI-powered online sports betting
Press
Oct 30, 2024
CAIP conducted a survey of notable races this election cycle
Research
Oct 29, 2024
Inspecting the claim that AI safety and US primacy are direct trade-offs
Opinion
Oct 24, 2024
Reclaim safety as the focus of international conversations on AI
Opinion
Oct 24, 2024
The FTC enacted its final rule banning fake AI reviews
Opinion
Oct 22, 2024
The jury is out on the social and health effects of these tools
Opinion
Oct 17, 2024
Why is there still inaction in Congress?
Opinion
Oct 17, 2024
Unfortunately, not all AI agent applications are low-risk
Opinion
Oct 16, 2024
Many voters prefer a careful, safety-focused approach to AI development over a strategy emphasizing speed and geopolitical competition
Opinion
Oct 15, 2024
States cannot afford to wait any longer to begin setting AI safety rules
Opinion
Oct 10, 2024
Creating a plan, anticipating challenges, and executing a coordinated response saves lives and protects communities
Opinion
Oct 9, 2024
No one man, woman, or machine should have this much power over the future of AI
Press
Oct 8, 2024
The "godfather of AI" is concerned about the dangers of the technology he helped create
Policy
Oct 8, 2024
CAIP supports these reporting requirements and urges Congress to explicitly authorize them
Research
Oct 7, 2024
Policymakers and engineers should prioritize alignment innovation as AI rapidly develops
Research
Oct 4, 2024
The rapid growth of AI creates areas of concern in the field of data privacy, particularly for healthcare data
Press
Oct 2, 2024
CAIP was featured in Politico's coverage of the SB 1047 veto decision
Opinion
Oct 1, 2024
It’s time for Congress to act
Press
Sep 30, 2024
California Governor Gavin Newsom has vetoed SB 1047, a crucial bill to ensure the responsible development and deployment of AI
Opinion
Sep 30, 2024
CAIP hopes that Walz and Vance will tell their fellow Americans where they stand on AI safety legislation
Opinion
Sep 30, 2024
In a bipartisan, public, and emphatic appeal, the Senate Select Committee on Intelligence implored the American people and the private sector to remain vigilant against election interference
Research
Sep 26, 2024
An overview of AI explainability concepts and techniques, along with recommendations for reasonable policies to mitigate risk while maximizing the benefits of these powerful technologies
Opinion
Sep 26, 2024
OpenAI's lobbying has expanded, and it's crowding out dialogue on safety.
Opinion
Sep 24, 2024
If Governor Newsom cares about AI safety, he'll sign SB 1047
Opinion
Sep 23, 2024
The US is punching below its weight when it comes to funding its AI Safety Institute (AISI)
Opinion
Sep 20, 2024
CAIP traveled to New York City to hear what local AI professionals have to say about AI's risks and rewards
Opinion
Sep 19, 2024
Who is vouching for the safety of OpenAI’s most advanced AI system?
Opinion
Sep 18, 2024
Engineers continue discovering techniques that boost AI performance after the main training phase
Press
Sep 17, 2024
Jason Green-Lowe joined the Morning Rush TV show to discuss AI policy
Opinion
Sep 17, 2024
A new poll shows once again that the American public is profoundly skeptical of AI and worried about its risks
Policy
Sep 16, 2024
Response to the initial public draft of NIST's guidelines on misuse risk
Opinion
Sep 13, 2024
America’s best-beloved host gathers a crew of technologists, humanists, and a law enforcer to discuss what’s next for humanity in the age of AI
Opinion
Sep 11, 2024
Out of 30 campaign websites reviewed, only 4 had even a single clear position on AI policy
Research
Sep 11, 2024
AI is spreading quickly in classrooms, offering numerous benefits but also risks
Policy
Sep 11, 2024
CAIP calls on House leadership to promptly bring these bills to the floor for a vote
Press
Sep 10, 2024
The voters have a right to know what their Presidential candidates will do to keep Americans safe in the age of AI
Event
Sep 10, 2024
Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies
Policy
Sep 9, 2024
With the Senate returning today from its August recess, there are two strong bills that are ready for action and that would make AI safer if passed
Opinion
Sep 6, 2024
The Center for AI Policy (CAIP) believes the 2024 Presidential candidates need to take a stand on AI safety
Event
Sep 5, 2024
Last week, Brian Waldrip and I traveled to South Dakota, seeking to understand how artificial intelligence (AI) is perceived and approached in the Great Plains.
Opinion
Sep 4, 2024
A 10-year-old girl accidentally hanged herself while trying to replicate a “Blackout Challenge” shown to her by TikTok’s video feed.
Policy
Sep 3, 2024
The Center for AI Policy (CAIP) organized and submitted the following letter to California Governor Gavin Newsom urging him to sign SB 1047.
Opinion
Aug 30, 2024
Yet another example why we need safe and trustworthy AI models.
Opinion
Aug 28, 2024
Political campaigns should disclose when they use AI-generated content on radio and television.
Event
Aug 24, 2024
The Center for AI Policy sponsored a mobile billboard to highlight the need for democratizing AI governance in the U.S.
Opinion
Aug 21, 2024
Last Monday night (8/19/24), the Democratic Party approved its 2024 Party Platform. The platform’s general rhetoric hits all the key themes of AI safety.
Opinion
Aug 19, 2024
Even if we plug the holes in our porous firewalls, there’s another problem we have to solve in order to win an AI arms race: alignment.
Research
Aug 13, 2024
How will American AI firms respond to General Purpose AI requirements?
Opinion
Aug 13, 2024
Reflections on a trip to DEFCON 2024
Opinion
Aug 6, 2024
Human-quality speech presents a heightened risk that AI will be used for fraud, misinformation, and manipulation
Opinion
Aug 1, 2024
This step forward doesn’t mean that AI is free from sexual exploitation
Opinion
Aug 1, 2024
CAIP proposes that AI companies report their cybersecurity protocols against a set of key metrics
Opinion
Aug 1, 2024
Why government oversight must complement corporate commitments
Press
Jul 31, 2024
Four bills advance to ensure commonsense AI governance and innovation in the United States
Research
Jul 29, 2024
Autonomous weapons are here, development is ramping up, and guardrails are needed
Event
Jul 29, 2024
Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future
Opinion
Jul 26, 2024
Meta essentially ran a closed-source safety check on an open-source AI system
Opinion
Jul 25, 2024
This highlights challenges in anticipating malicious uses of AI
Press
Jul 25, 2024
Calling for more decisive congressional action on AI safety
Policy
Jul 24, 2024
Hopes to build consensus on cybersecurity standards, emergency preparedness, and whistleblower protections
Press
Jul 23, 2024
Center for AI Policy applauds Senate action, calls for comprehensive AI safety legislation
Opinion
Jul 19, 2024
To promote human flourishing, AI tools must be safe
Opinion
Jul 19, 2024
Today's CrowdStrike-Microsoft outage is case in point
Opinion
Jul 18, 2024
As time goes on, the ways in which AI can enhance itself will multiply
Opinion
Jul 16, 2024
AI safety and responsibility are core themes of NATO's AI Strategy
Opinion
Jul 15, 2024
Regardless of the outcome, OpenAI needs stronger whistleblower protections
Opinion
Jul 12, 2024
The security breach at OpenAI should raise serious concerns among policymakers
Opinion
Jul 9, 2024
Corporate incentives do not ensure optimal outcomes for public safety
Opinion
Jul 9, 2024
Like the links on the second page of Google’s search results, these principles are something of a mixed bag
Opinion
Jul 8, 2024
Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading
Press
Jun 28, 2024
Congress will need to expressly delegate authority to a technically literate AI safety regulator
Research
Jun 27, 2024
AI will both intensify current privacy concerns and fundamentally restructure the privacy landscape
Press
Jun 27, 2024
The American people deserved to hear how their potential leaders intend to confront AI risks
Event
Jun 26, 2024
Protecting Privacy in the AI Era: Data, Surveillance, and Accountability
Opinion
Jun 25, 2024
Both Biden and Trump agree that AI is scary, but what do they plan to do about these dangers?
Opinion
Jun 16, 2024
An op-ed in the Capital Journal
Opinion
Jun 13, 2024
Qualified third parties should audit AI systems and verify their compliance with federal laws and regulations
Opinion
Jun 11, 2024
We hope that at their next product launch, Apple will address AI safety
Opinion
Jun 4, 2024
Aschenbrenner argues that AI systems will improve rapidly
Press
May 21, 2024
The EU AI Act will soon enter into force
Opinion
May 20, 2024
The responsible thing to do is to take their warnings seriously
Press
May 16, 2024
The Bipartisan Senate AI Working Group has given America a roadmap for AI, but the roadmap has no destination.
Opinion
May 13, 2024
Our views on the latest AI resources from NIST
Opinion
May 3, 2024
Unpacking the details of Microsoft's latest announcement about expanding its responsible AI team
Opinion
May 2, 2024
Right now, only Big Tech gets to decide whether AI systems are safe
Policy
May 1, 2024
Recommendations for enhancing US cloud security
Press
Apr 30, 2024
CAIP's views on the new AI framework and bill
Research
Apr 24, 2024
Our research on AI's current and future effects on the labor market
Event
Apr 23, 2024
Surveying the future of work
Press
Apr 18, 2024
CAIP welcomes the release of the bipartisan Future of AI Innovation Act
Explainer
Apr 17, 2024
A majority of the American public supports government regulation of AI
Opinion
Apr 17, 2024
CAIP's Executive Director participated in a panel discussion hosted by the Social Security Administration
Policy
Apr 9, 2024
Our model legislation for requiring that AI be developed safely
Press
Apr 9, 2024
Announcing our proposal for the "Responsible Advanced Artificial Intelligence Act of 2024"
Opinion
Apr 8, 2024
Genius-level AI will represent a total paradigm shift
Press
Apr 3, 2024
Memorandum of Understanding marks a new era in AI safety
Policy
Mar 29, 2024
Assessing the implications of open weight AI models
Press
Mar 28, 2024
March 2024 appearance on WWL First News
Explainer
Mar 26, 2024
Examining how increasingly advanced AI systems develop new kinds of abilities