About The Center for AI Policy
Jason Green-Lowe
Executive Director
Marc Ross
Communications Director
Kate Forscey
Government Affairs Director
Marta Sikorski Martin
Director of Development
Mark Reddish
External Affairs Director
Claudia Wilson
Senior Policy Analyst
Brian Waldrip
Government Relations Director
Tristan Williams
Research Fellow
Jakub Kraus
Technical Content Lead
Joe Kwon
Technical Policy Analyst
Iván Torres
National Advocacy Coordinator
David Krueger
David is a Computer Science professor at the University of Cambridge. His research group focuses on deep learning and AI alignment. He has also served as a research director at the UK's AI Safety Institute.
Gabriel Weil
Gabriel Weil is an Assistant Professor at Touro University Law Center and a Non-Resident Senior Fellow at the Institute for Law & AI. He joined the CAIP board in January 2025 and also serves on the board of PIBBSS.
Jeffrey Ladish
Jeffrey directs Palisade Research and leads AI work at the Center for Humane Technology. He previously worked on security at Anthropic.
Olivia Jimenez
Olivia has experience working on AI policy in industry and in government. Previously, she led programs to build up the AI safety research field at top US and Indian universities. She graduated from Columbia University.
Thomas Larsen
Thomas is a former AI safety researcher. He worked at the Machine Intelligence Research Institute and contracted for OpenAI. He has also been working on developing standards for frontier AI development.
Frequently asked questions
Who is behind CAIP?
We’re a small, DC-based team of former AI researchers and policy professionals. We work with a wide network of experts from industry, academia, think tanks, government, and nonprofits. You can read more about us here.
Why did you launch CAIP?
We launched CAIP to address the urgent need for effective AI governance. In 2023, hundreds of AI experts warned that AI could cause catastrophic harm in the near future. At the time, very few researchers were sharing these safety challenges with policymakers or identifying concrete legislative solutions. CAIP is now helping close that gap.
How is CAIP funded?
CAIP is grateful for the generous support of our donors. We’re supported primarily by mid-level and major individual donors who share our mission to improve AI governance. Several of these donors built their wealth during the dot-com boom or by working at hedge funds. To protect the privacy of these individuals, we do not publish their names. We also received seed funding through an organization sponsored by Jaan Tallinn, a founding engineer at Skype. To maintain our independence, we do not accept funding from companies that design or build AI software or hardware. We are nonpartisan and focused squarely on the public interest.
How can I get involved?
We’re always looking for people to join our team and support our work. We also regularly engage with stakeholders across AI and policy, and we would love to hear from you if you have questions, feedback, or ideas for collaboration.