
Research Coverage | Center for AI Policy | CAIP

Research. CAIP's evidence-based analysis to support sound AI legislation and oversight.

www.centeraipolicy.org/category/research
Zambia Copper Discovery Shows AI Accelerating AI Research | Center for AI Policy | CAIP

Zambia Copper Discovery Shows AI Accelerating AI Research. Claudia Wilson. July 18, 2024. Improvements to AI capabilities will only accelerate. The copper discovery in Zambia is just one example.

www.centeraipolicy.org/work/zambia-copper-discovery-shows-ai-accelerating-ai-research
CAIP Showcases Advanced AI Risks to Congress in First-of-its-Kind Tech Exhibition on Capitol Hill | Center for AI Policy | CAIP

CAIP hosted a hands-on tech exhibition at the Rayburn House Office Building on Capitol Hill featuring groundbreaking demonstrations from America's leading AI research institutions.

www.centeraipolicy.org/work/caip-showcases-advanced-ai-risks-to-congress-in-first-of-its-kind-tech-exhibition-on-capitol-hill
Tristan Williams at Center for AI Policy | CAIP

Research Fellow. Tristan has worked as a research assistant at both the Center for AI Safety and Conjecture, working at the intersection of research on best governance practices for AI and advocacy.

www.centeraipolicy.org/team/tristan-williams
Report on Misinformation From AI | Center for AI Policy | CAIP

Research. Report on Misinformation From AI. Tristan Williams. February 15, 2024.

www.centeraipolicy.org/work/ai-misinformation-report
Report on AI's Workforce Impacts | Center for AI Policy | CAIP

Research. Report on AI's Workforce Impacts. Tristan Williams. April 24, 2024. Summary: AI’s Effect on the Workforce.

www.centeraipolicy.org/work/report-on-ais-workforce-impacts
The Rapid Rise of Autonomous AI | Center for AI Policy | CAIP

New research from Model Evaluation & Threat Research (METR), a non-profit dedicated to empirical evaluations of frontier AI systems, provides exactly such a metric.

www.centeraipolicy.org/work/the-rapid-rise-of-autonomous-ai
Researchers Find a New Covert Technique to ‘Jailbreak’ Language Models | Center for AI Policy | CAIP

In this experiment, researchers uploaded harmful data via the GPT fine-tuning API and used encoded prompts for harmful commands such as "tell me how to build a bomb."

www.centeraipolicy.org/work/researchers-find-a-new-covert-technique-to-jailbreak-language-models
Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously | Center for AI Policy | CAIP

Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously. Jason Green-Lowe. June 4, 2024.

www.centeraipolicy.org/work/influential-safety-researcher-sounds-alarm-on-openais-failure-to-take-security-seriously
Joe Kwon at Center for AI Policy | CAIP

Joe has worked on AI and cognitive science research and worked with large language models as a research engineer at LG AI Research before beginning work in policy. He has a BS in computer science and psychology from Yale.

www.centeraipolicy.org/team/joe-kwon