The Elite Eight: AI Safety Ideas with Broad Stakeholder Support

March 26, 2025

In honor of March Madness, here are the "Elite Eight" proposals to address AI risk. These recommendations would be both impactful and feasible, given their broad stakeholder support.

As AI becomes more capable, we're starting to better understand what a loss would look like: non-expert development of chemical and biological weapons, psychological harm, and rapid increases in agentic action. Unreliable AI could deliver a shock with consequences far exceeding a 16-seed knocking off a 1-seed. When it comes to innovation and the benefits of AI, the US has the talent to win, but if we fail to play defense, we risk a major upset.

1. Evaluate AI for National Security Risks 

The best solution to the dangers posed by uncontrolled AI would be to require that all new AI models above a certain capability threshold be evaluated by independent experts and demonstrated to be secure before they are deployed. Instead of relying exclusively on a company’s own marketing materials to conclude that their products are under control, the US should be conducting third-party national security audits.

This concept has support from both industry and Congress: 

  • Anthropic has highlighted the need for the federal government to develop robust testing capabilities for powerful AI systems, noting that its AI model “demonstrates concerning improvements in its capacity to support aspects of biological weapons development.” 
  • The Frontier Model Forum (whose members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI) suggested that the US should continue to support national security testing and evaluation.
  • Google: “For the most capable frontier AI systems, the Administration should identify potential capabilities that could raise national security risks and work with industry to develop and promote standardized industry protocols, secure data-sharing, standards, and safeguards.”
  • Last year, bipartisan legislation was introduced that envisioned a federal process in which technical experts review advanced AI models to ensure they incorporate robust safeguards against potential misuse in weapons of mass destruction or automated cyberattacks. 

2. Require AI Companies to Publish Risk Management Frameworks 

Most leading AI developers have published risk management frameworks that explain their procedures for evaluating models and reducing the risk of dangerous capabilities being misused and causing serious harm. However, these frameworks vary in their level of detail and approach to risk, and some AI companies have failed to fulfill their commitments to publish safety frameworks. We need assurance that the companies developing significant AI capabilities have effective approaches to managing risk.

Requiring companies to publish risk management frameworks does not mean disclosing confidential or proprietary information that could compromise business interests or national security. Instead, the goal is to provide sufficient transparency about the overall approach, governance structures, roles and responsibilities, and high-level strategies companies use to mitigate AI risks. While independent national security evaluations (#1) offer external validation of an AI system’s safety and security, the published risk management frameworks ensure ongoing accountability, transparency, and clarity about how companies manage and respond to risks as AI technology evolves.

3. Bolster Security at Frontier Labs

OpenAI called for protecting AI systems from theft by the PRC and other malign actors. The federal government should invest in programs that help US AI companies protect their advanced technology. As TechNet (a network representing many of the leading AI developers, deployers, users, and researchers) observed, “Efforts to promote America’s AI leadership will be meaningless if adversaries can steal sensitive intellectual property such as AI model weights or proprietary technology or launch attacks against critical AI infrastructure.” 

4. Invest in AI Explainability Research

Even AI engineers cannot fully explain or control how AI models reach their outputs. There are promising lines of research that could solve this problem, which is crucial for safety as well as American dominance in AI, but researchers have insufficient access to computing power to pursue them. One way to address this critical gap is by investing in resources like the National AI Research Resource (NAIRR) and ensuring that more computing power is allocated for academics to research explainability. 

There is broad support for federal investment in AI science from industry stakeholders, including Google, TechNet, and Business Roundtable. The US Chamber of Commerce supports research to address potential national security risks associated with advanced AI, and the Business Software Alliance suggests expanding the NAIRR program to “a fully developed, permanent US government initiative with a broader scope, more participants, and sufficient funding.”

5. Strengthen Enforcement of Export Controls

National security and economic leadership depend upon preventing advanced AI from being developed by, or falling into the hands of, bad actors. The Bureau of Industry and Security (BIS) is responsible for enforcing export controls on advanced chips and AI model weights, but BIS lacks adequate staff and technology. As the US Chamber of Commerce explained, BIS “will need to be adequately resourced and modernized, to monitor the AI supply chain and counter-smuggling and other technology diversion efforts to foreign adversaries.” BIS should receive $75 million in additional annual funding to hire adequate staff, along with a one-time additional payment of $100 million to immediately address IT issues.

6. Conduct Preparedness Exercises

Last year, DHS CISA held a first-of-its-kind tabletop exercise with the private sector focused on effective and coordinated responses to AI security incidents. This had strong support from industry participants. For example, OpenAI said, “This initiative not only strengthens our defenses but also fosters a community dedicated to collective security advancements.”

The federal government should enhance this program and require frontier AI companies to participate. These exercises should evaluate realistic scenarios such as autonomous cyberattacks, AI-driven biological threats, and AI-enabled drone attacks on infrastructure. Proactive, collaborative planning will help both the public and private sectors prepare to respond to AI-driven crises swiftly and effectively.

7. Require AI Companies to Report Cyber Breaches 

Tracking how well frontier AI companies are guarding their technology, and any upticks in attacks on these systems, will support public and private efforts to preserve American AI leadership. OpenAI has called AI "critical infrastructure," and Google suggested that "Expanded threat sharing with industry will similarly help identify and disrupt both security threats to AI and threat actor use of AI." Frontier AI developers should be required to inform the government when their systems are breached by outside hackers. This could be achieved by expanding an existing framework for tracking cyberattacks on critical infrastructure.

8. Preserve NIST’s Expertise

The National Institute of Standards and Technology (NIST) is statutorily required to support the development of technical standards and guidelines that promote trustworthy AI systems. It has made valuable contributions to AI risk management and evaluations, and there is broad support for preserving and potentially expanding NIST’s work:

  • The R Street Institute suggested directing NIST “to establish risk tolerance parameters and best practices for AI in security.” 
  • The US Chamber of Commerce highlighted NIST’s valuable contribution to managing AI risk. 
  • The Frontier Model Forum notes that NIST is “well-equipped to develop robust processes for responsibly managing the national security and public safety risks of frontier AI systems.” 
  • Google suggests NIST can lead on creating evaluations for major AI risks, developing guidelines for responsible scaling and security protocols, and researching and developing safety benchmarks and mitigations.
  • The Center for a New American Security: "The administration should empower the AISI as a hub of AI expertise for the broader federal government to ensure AI strengthens rather than undermines U.S. national security."

Continued investment in NIST’s AI talent strengthens its capacity to create standards and best practices that accelerate AI progress and mitigate risks. 

As the dominant player in AI, the US must lead. It's time to move past the rhetoric that any intervention would risk American dominance, and to effect meaningful change. The Center for AI Policy is advocating for these eight common-sense measures. To learn more about CAIP's policy recommendations, read our comments to the Trump Administration on development of an AI Action Plan and CAIP's 2025 advocacy priorities.

(Note: the industry positions described here are largely drawn from responses for the Trump Administration's AI Action Plan. These are offered as examples and are not intended to represent an exhaustive analysis of stakeholder opinions.)
