As AI becomes increasingly integrated into military applications, critical infrastructure, and other essential areas - and as the possibility of disruptive superintelligence grows more imminent - recent polling by the Pew Research Center shows rising public concern about AI’s impacts. Among the many reasons for concern, one fundamental issue stands out: even the engineers developing advanced AI systems often cannot fully explain how these systems reach their decisions.
Explainability - the ability to clearly understand and interpret the decision-making process of AI - is key to delivering AI’s benefits and preventing serious harm. Improving the science of AI explainability will enable the development of more capable systems that serve America's economic competitiveness, national security, and human flourishing. Without adequate explainability, it will be difficult or impossible to guarantee that AI systems won't misidentify targets on the battlefield or recommend incorrect medical treatments. Understanding precisely how an AI system makes decisions is also crucial for establishing effective guardrails against dangerous misuse, such as chemical or biological weapons development.
AI companies have the resources to improve explainability, but their incentives often lead them to prioritize development of “shiny products,” regardless of whether they can explain how those products work. In contrast, America's academic institutions have the right incentives but often lack essential resources - particularly access to the advanced computing hardware (such as powerful GPUs) necessary for cutting-edge research.
The National Artificial Intelligence Research Resource (NAIRR) is a framework for addressing these challenges by expanding access to advanced computing infrastructure. Currently running as a pilot program led by the National Science Foundation, the NAIRR provides tools for AI research, including direct access to federal compute resources as well as cloud credits and allocations on commercial platforms such as AWS, Google Cloud, and Microsoft Azure.
The NAIRR was launched in early 2024 with initial priority topics including safe, secure, and trustworthy AI; human health; and environment and infrastructure. Federal funding of $30 million was complemented by substantial in-kind contributions from the private sector: $20 million in compute credits on Microsoft Azure; $30 million in overall support for the pilot from NVIDIA, including $24 million worth of computing on NVIDIA's DGX Cloud platform; up to $1 million in credits for model access from OpenAI; and a variety of additional computational, data, software, model, and training resources from other companies.
Thus, the NAIRR serves as a force-multiplier, leveraging modest federal investments to foster a much larger industry-wide effort aligned with the public interest.
Given its promising start, the NAIRR pilot should be permanently established and fully funded to realize its potential. The Center for AI Policy supports the CREATE AI Act, a bipartisan bill that would codify the NAIRR, and we have recommended that the Administration ensure adequate funding and designate AI explainability research as a priority issue. As budget negotiations continue in Congress and the Administration develops its AI Action Plan, we will continue advocating for this widely supported strategy for delivering the benefits of AI while mitigating its risks.