The House Bipartisan Artificial Intelligence Task Force released its long-anticipated AI policy report in the last days of this Congress. The report offers thoughtful and comprehensive analysis, but it falls short of addressing the most crucial challenge of our time: preventing catastrophic risks from advanced artificial intelligence. In 273 pages, the report uses “catastrophic” only once in a footnote.
After two years of congressional deliberation on artificial intelligence, we need more than careful analysis — we need decisive action. AI development is accelerating rapidly, with new and more powerful systems deployed every few months. Without new guardrails, these AI systems pose extreme risks to humanity’s future.
Jason Green-Lowe, the executive director of the Center for AI Policy (CAIP), published an op-ed in The Hill this week about these risks. To learn more about the guardrails CAIP recommends, read the full op-ed here.
A federal program that could support AI explainability research by expanding access to advanced computing infrastructure should be made permanent and fully funded.
A group of AI researchers and forecasting experts just published their best-guess forecast of the near future of AI.
New research from METR reveals that AI's ability to independently complete tasks is improving rapidly.