Why America Needs AI Legislation

September 17, 2023

This decade, AI could become powerful enough to cause global catastrophes.

Within the next 3-10 years, AI companies could develop “superintelligent” AI: AI that is vastly smarter and more powerful than humans.

  • OpenAI: “While superintelligence seems far off now, we believe it could arrive this decade.”

AI progress can occur rapidly.

AI systems are already being used to accelerate coding and engineering work. They also show early signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.

  • Experts warn AI systems will soon be able to engineer pandemics, orchestrate novel cyberattacks, and disrupt critical infrastructure.
  • AI companies themselves warn that “the vast power of superintelligence… could lead to the disempowerment of humanity or even human extinction.”

Companies are not sure that they will be able to control very powerful AI.

Top AI companies admit that their current practices may be insufficient for the more capable systems they anticipate building.

  • This is both a safety and a security issue.
  • AI systems could be misused to cause grievous harm.
  • AI systems themselves could also get out of developers’ control.

Solving safety research questions requires time, and unchecked competitive pressures could compel companies to prioritize profits over safety.

We need to prevent an AI arms race so that we have enough time to solve safety challenges before building catastrophically powerful systems.

America needs the capacity to rapidly identify and respond to AI risks.

This requires:

  • more visibility into the development of advanced general-purpose AI systems like GPT-4,
  • clear mechanisms to halt unsafe development in case of an emergency, and
  • better incentives for developers to prioritize safety from the outset.

That's why we’re calling for:

  • a government registry of the advanced hardware used to train new AIs,
  • permitting requirements for frontier AI development, and
  • strict liability for severe harms caused by AI systems.
