Last week, Congress passed the FY 2025 Servicemember Quality of Life Improvement and National Defense Authorization Act (NDAA). The measure is expected to be signed into law soon by President Joe Biden.
The NDAA Includes Several Provisions Useful for AI Safety
The Center for AI Policy (CAIP) appreciates the hard work behind this important legislative milestone. In this challenging and chaotic environment, passing any legislation is difficult, let alone an 1,800-page bill authorizing over $883 billion in spending for the Department of Defense.
We commend the Members of the House and Senate Armed Services Committees who made this bill possible, particularly House Chairman Mike Rogers (R-AL) and Ranking Member Adam Smith (D-WA), and Senate Chairman Jack Reed (D-RI) and Ranking Member Roger Wicker (R-MS).
CAIP is pleased to see that the bipartisan and bicameral legislation contains several provisions that support AI safety.
Key accomplishments include:
- Directive language expanding the duties of the Chief Digital and Artificial Intelligence Officer Governing Council to “identify and assess AI models and advanced AI technologies that could pose a national security risk if accessed by an adversary of the United States and develop strategies to prevent unauthorized access and usage of potent AI models by countries that are adversaries of the United States.” (Sec. 225)
- A tabletop exercise program to prepare defenses against cyber threats to America’s defense industrial base, which CAIP hopes and expects will include defending against malicious AI tools. (Sec. 1504)
- A Congressional statement of policy directing that nuclear weapons must remain under meaningful human control, noting that “it is the policy of the United States that the use of AI efforts should not compromise the integrity of nuclear safeguards, whether through the functionality of weapons systems, the validation of communications from command authorities, or the principle of requiring positive human actions in execution of decisions by the President concerning the employment of nuclear weapons.” (Sec. 1638)
- A directive that creates an AI Security Center to manage risk, focused on developing guidance to mitigate counter-AI techniques and promote secure AI adoption practices for national security systems. (Sec. 6504)
Much Work Remains to Be Done in the Coming Year
While CAIP supports the FY 2025 NDAA, we note that much work remains to be done to adequately secure our defenses against the threats posed by advanced AI. The current NDAA makes admirable strides in preparing America to protect against AI-powered enemy action. The next NDAA must also consider and defend against the inherent risks posed by malfunctioning AI.
Some of the policies that CAIP would like to see in the FY 2026 NDAA include:
- Explicit TEVV (testing, evaluation, validation, and verification) standards for the Defense Department’s procurement of AI systems, so that companies that invest in resilient AI will be rewarded for their diligence and will not lose business to competitors that recklessly cut corners while developing AI.
- Additional research on interpretability and explainability, so that our military can understand why and how AI-assisted targeting systems make their recommendations.
- Enhanced talent recruitment programs, so that the Defense Department will have the in-house technical expertise needed to form an independent judgment about the safety of commercial off-the-shelf (COTS) AI solutions without being forced to rely on vendors’ potentially self-interested self-evaluations.
- A directive to consider how tasking AI with military objectives (including its use in offensive systems designed to kill other humans) may affect model autonomy risk, i.e., the risk that the AI system will begin acting in its own interests rather than following human instructions.
- Expanding the policy statement on meaningful human control, or human-in-the-loop, to include all strategic decisions about escalating the level of violence permitted in a theater of operations. AI may be able to select an appropriate target, but it should never decide whether to start a war.
AI capabilities are expected to increase exponentially over the next year: there is no time to waste.
CAIP looks forward to working with the 119th Congress and the members of the House and Senate Armed Services Committees to ensure that the FY 2026 NDAA gives the Defense Department the resources and guidance it needs to fully secure America’s military AI.