Jason Green-Lowe

Executive Director

Jason has over a decade of experience as a product safety litigator and as a nonprofit compliance counselor. He has a JD from Harvard Law and teaches a course on AI Governance.

AI Is Lying to Us About How Powerful It Is

Opinion

Dec 10, 2024

We have hard evidence that AI is lying, scheming, and protecting itself, but developers don’t care

Center for AI Policy Statement on David Sacks as the White House "AI and Crypto Czar"

Press

Dec 5, 2024

CAIP looks forward to working with David Sacks and the Trump administration

Finding the Evidence for Evidence-Based AI Regulations

Opinion

Dec 3, 2024

There’s more science to be done, but it’s not too early to start collecting reports from AI developers

CAIP Celebrates the International Network of AI Safety Institutes

Opinion

Nov 26, 2024

The United States hosted the inaugural meeting of a growing global network

Slower Scaling Gives Us Barely Enough Time To Invent Safe AI

Opinion

Nov 20, 2024

Slower AI progress would still move fast enough to radically disrupt American society, culture, and business

The AI Safety Landscape Under a New Donald Trump Administration

Press

Nov 7, 2024

Significant changes appear imminent in America’s AI policy landscape

Center for AI Policy (CAIP) Congressional Endorsements for Election 2024

Press

Oct 30, 2024

CAIP conducted a survey of notable races this election cycle

Fake AI Reviews Are the First Step on a Slippery Slope to an AI-Driven Economy

Opinion

Oct 24, 2024

The FTC enacted its final rule banning fake AI reviews

When Polling Is Ahead of Politicians

Opinion

Oct 16, 2024

Many voters prefer a careful, safety-focused approach to AI development over a strategy emphasizing speed and geopolitical competition

Letter to the Editors of the Financial Times Re: SB 1047

Opinion

Oct 15, 2024

States cannot afford to wait any longer to begin setting AI safety rules

Sam Altman’s Dangerous and Unquenchable Craving for Power

Opinion

Oct 9, 2024

No one man, woman, or machine should have this much power over the future of AI

CAIP Congratulates AI Safety Advocate on Winning the 2024 Nobel Prize in Physics

Press

Oct 8, 2024

The "godfather of AI" is concerned about the dangers of the technology he helped create

CAIP Condemns Governor Newsom’s Veto of Critical AI Regulation Bill

Press

Sep 30, 2024

California Governor Gavin Newsom has vetoed SB 1047, a crucial bill to ensure the responsible development and deployment of AI

Memo: Walz-Vance Debate and the Hope for Hearing AI Policy Positions

Opinion

Sep 30, 2024

CAIP hopes that Walz and Vance will tell their fellow Americans where they stand on AI safety legislation

There's No Middle Ground for Gov. Newsom on AI Safety

Opinion

Sep 24, 2024

If Governor Newsom cares about AI safety, he'll sign SB 1047

Reflections on AI in the Big Apple

Opinion

Sep 20, 2024

CAIP traveled to New York City to hear what local AI professionals have to say about AI's risks and rewards

OpenAI's Latest Threats Make a Mockery of Its Claims to Openness

Opinion

Sep 19, 2024

Who is vouching for the safety of OpenAI’s most advanced AI system?

AP Poll Shows Americans’ Ongoing Skepticism of AI

Opinion

Sep 17, 2024

A new poll shows once again that the American public is profoundly skeptical of AI and worried about its risks

CAIP Welcomes Useful AI Bills From House SS&T Committee

Policy

Sep 11, 2024

CAIP calls on House leadership to promptly bring these bills to the floor for a vote

Presidential Candidates Disappointingly Quiet on AI

Press

Sep 10, 2024

The voters have a right to know what their Presidential candidates will do to keep Americans safe in the age of AI

Memo: The Harris-Trump Debate + Safe AI

Opinion

Sep 6, 2024

The Center for AI Policy (CAIP) believes the 2024 Presidential candidates need to take a stand on AI safety

TikTok Lawsuit Highlights the Growing Power of AI

Opinion

Sep 4, 2024

A 10-year-old girl accidentally hanged herself while trying to replicate a “Blackout Challenge” shown to her by TikTok’s video feed.

Governor Newsom Must Support SB 1047

Policy

Sep 3, 2024

The Center for AI Policy (CAIP) organized and submitted the following letter to California Governor Gavin Newsom urging him to sign SB 1047.

Democratic Platform Nails AI Strategy But Flubs AI Tactics

Opinion

Aug 21, 2024

Last Monday night (8/19/24), the Democratic Party approved its 2024 Party Platform. The platform’s general rhetoric hits all the key themes of AI safety.

You Can't Win the AI Arms Race Without Better Alignment

Opinion

Aug 19, 2024

Even if we plug the holes in our porous firewalls, there’s another problem we have to solve in order to win an AI arms race: alignment.

You Can’t Win the AI Arms Race Without Better Cybersecurity

Opinion

Aug 13, 2024

Reflections on a trip to DEFCON 2024

Assessing Amazon's Call for 'Global Responsible AI Policies'

Opinion

Aug 1, 2024

Why government oversight must complement corporate commitments

Senate Commerce Committee Advances Landmark Package of Bipartisan Legislation Promoting Responsible AI

Press

Jul 31, 2024

Four bills advance to ensure commonsense AI governance and innovation in the United States

Meta Conducts Limited Safety Testing of Llama 3.1

Opinion

Jul 26, 2024

Meta essentially ran a closed-source safety check on an open-source AI system

CAIP Responds to Altman's AI Governance Op-Ed

Press

Jul 25, 2024

Calling for more decisive congressional action on AI safety

US Senators Demand AI Safety Disclosure From OpenAI

Press

Jul 23, 2024

Center for AI Policy applauds Senate action, calls for comprehensive AI safety legislation

How to Advance 'Human Flourishing' in the GOP's Approach to AI

Opinion

Jul 19, 2024

To promote human flourishing, AI tools must be safe

OpenAI's Undisclosed Security Breach

Opinion

Jul 12, 2024

The security breach at OpenAI should raise serious concerns among policymakers

Statement on Google's AI Principles

Opinion

Jul 9, 2024

Like the links on the second page of Google’s search results, these principles are something of a mixed bag

Letter to the Editor of Reason Magazine

Opinion

Jul 8, 2024

Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading

Supreme Court’s Chevron Ruling Underscores the Need for Clear Congressional Action on AI Regulation

Press

Jun 28, 2024

Congress will need to expressly delegate the authority for a technically literate AI safety regulator

AI Concerns Absent From the Presidential Debate

Press

Jun 27, 2024

The American people deserved to hear how their potential leaders intend to confront AI risks

Memo: Thursday's Debate and "Scary AI"

Opinion

Jun 25, 2024

Both Biden and Trump agree that AI is scary, but what do they plan to do about these dangers?

Hickenlooper on AI Auditing Standards

Opinion

Jun 13, 2024

Qualified third parties should audit AI systems and verify their compliance with federal laws and regulations

Apple Intelligence: Revolutionizing the User Experience While Failing to Confront AI's Inherent Risks

Opinion

Jun 11, 2024

We hope that at their next product launch, Apple will address AI safety

Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously

Opinion

Jun 4, 2024

Aschenbrenner argues that AI systems will improve rapidly

OpenAI Safety Team's Departure is a Fire Alarm

Opinion

May 20, 2024

The responsible thing to do is to take their warnings seriously

The Senate's AI Roadmap to Nowhere

Press

May 16, 2024

The Bipartisan Senate AI Working Group has given America a roadmap for AI, but the roadmap has no destination.

What’s Missing From NIST's New Guidance on Generative AI?

Opinion

May 13, 2024

Our views on the latest AI resources from NIST

Who’s Actually Working on Safe AI at Microsoft?

Opinion

May 3, 2024

Unpacking the details of Microsoft's latest announcement about expanding its responsible AI team

Should Big Tech Determine if AI Is Safe?

Opinion

May 2, 2024

Right now, only Big Tech gets to decide whether AI systems are safe

Comment on the Commerce Department's Proposed Cloud Computing Rules

Policy

May 1, 2024

Recommendations for enhancing US cloud security

There’s Nothing Hypothetical About Genius-Level AI

Opinion

Apr 8, 2024

Genius-level AI will represent a total paradigm shift