Broadening AI Regulation Beyond Use Case

Tristan Williams, January 3, 2024

Executive Summary

Influential voices like NVIDIA and IBM have suggested regulating AI based on specific use cases and asking existing regulators to oversee AI being used in each industry, with airline regulators tackling airline AI, medical regulators tackling medical AI, etc. This method fails to address the unique risks inherent in new general-purpose AIs (GPAIs) like GPT-4, namely: misuse across a broad array of use cases, unprecedented rapid progress, and rogue systems that evade control.

To properly address these risks and keep the American public safe, we need to establish a central regulator which will:

  • reduce government waste and needless redundancies,
  • bring leadership necessary for coordination,
  • facilitate effective, risk-focused, pre-deployment regulation,
  • introduce much-needed proactivity into AI regulation, and
  • account for novel AI capabilities that fall outside existing regulators.

One promising framework for a central regulator is a tiered approach that categorizes models according to indicators of capabilities, and scales regulatory burden with capabilities.

What has changed?

Regulating by use case made sense as recently as five years ago, when essentially all AIs were tailored to narrow circumstances and unable to accomplish tasks outside them. When AIs were narrowly tailored, we could manage AI risk well by identifying the riskiest use cases and holding AI to higher standards in those domains. However, today's general-purpose AIs are importantly different from the narrowly tailored AIs of the past and pose three unique challenges — broad misuse, rapid progress, and rogue systems — which must be accounted for with general-purpose regulation.

Read the piece in full here.
