Google has “long said AI is too important not to regulate, and too important not to regulate well.” Two weeks ago, Google elaborated on what it means by “regulating well” by releasing seven principles for getting AI regulation right. Like the links on the second page of Google’s search results, these principles are something of a mixed bag.
What Google Gets Right
The Center for AI Policy (CAIP) supports Google’s first, third, and seventh principles. It makes sense to “support responsible innovation,” “strike a sound copyright balance,” and “strive for alignment” across different levels of government. We particularly like Google’s call for machine-readable features that allow a website to opt out of sharing its data for AI training. Adding a few lines of code that make data-scraping tools more responsible is a great example of a concrete, straightforward win that AI developers can implement this year, even before Congress makes it a legal requirement.
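To make this concrete, here is a minimal sketch, assuming Python’s standard urllib.robotparser module and an AI-training crawler token such as “Google-Extended” (the robots.txt token Google has published for controlling AI-training use of web content), of how a scraping pipeline could honor a site’s machine-readable opt-out before adding its pages to a training corpus. The URL and function name are illustrative, not part of Google’s proposal.

```python
# Minimal sketch: a scraper that honors a site's machine-readable opt-out
# before collecting pages for AI training.
# Assumption: the site expresses its preference in robots.txt using an
# AI-training crawler token such as "Google-Extended".
from urllib import robotparser
from urllib.parse import urlparse

AI_TRAINING_AGENT = "Google-Extended"  # token sites can use to opt out of AI training


def may_use_for_training(page_url: str) -> bool:
    """Return True only if the site's robots.txt permits AI-training crawlers."""
    parsed = urlparse(page_url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(AI_TRAINING_AGENT, page_url)


if __name__ == "__main__":
    url = "https://example.com/articles/some-page"  # illustrative URL
    if may_use_for_training(url):
        print(f"OK to include {url} in the training corpus.")
    else:
        print(f"{url} has opted out of AI training; skipping.")
```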
Google’s second principle calls for “focusing on outputs…rather than trying to manage fast-evolving computer science and deep-learning techniques.” Taken at face value, this makes sense: it’s an AI’s impact on the world, not its internal structure, that makes it good or bad for the public.
Sufficiently Advanced AI Needs Preemptive Safety Measures
However, we have to ask: is Google saying that new kinds of AI should go totally unregulated unless and until they start killing people? The “outputs” of AlphaFold are harmless protein structures, but the same deep-learning techniques that let AlphaFold accelerate the search for new medicines could also be used to identify new poisons or even a new super-virus. The outputs of Stable Diffusion were harmless pictures of cats at the beach, right up until people started using it to make non-consensual deepfake pornography of teenagers. Sometimes, the only way to get traction on the severe harms a new AI model can cause is to look under the hood and analyze its code. If we wait until it’s apparent that there’s a “real issue” with a new model, it may be too late to protect the public from those harms.
It’s fine to “wait and see” what the real issues are if the worst-case scenario is creating pictures that misrepresent history. But as AI grows more powerful, we could be looking at threats to essential infrastructure, the creation of weapons of mass destruction, or persistent digital scam artists that reproduce themselves and migrate from server to server, like intelligent versions of the computer ‘worms’ of the early 2000s. For these catastrophic risks, we need preemptive safeguards, and sometimes that will mean managing how new deep-learning techniques are rolled out, as outlined in CAIP’s model legislation. The math itself isn’t going to be illegal, but when that math is backed by $100 million in computing power for a single training run, it’s reasonable to insist that the training be done safely and with third-party auditors, just like any other major industrial project.
General-Purpose AI Needs a Dedicated Government Regulator
Google’s fourth principle is “plug gaps in existing laws.” While it's true that some problems can be solved with a common-sense application of existing laws, AI is a novel enough technology that it often makes sense to prioritize legislation that protects against AI-specific harms. We should be able to promptly ban non-consensual deepfakes without doing an exhaustive search to see whether there's some clever way of twisting existing libel law until it almost covers the problem.
Similarly, Google’s fifth principle calls for “empowering existing agencies.” This works fine for special-purpose AI that fits neatly into an existing jurisdiction; we may as well have the FAA regulate AI-controlled airplanes. However, many AIs are general-purpose in both their design and their function. If the same AI can play the role of a therapist, an investment advisor, a pharmacist, a taxi driver, and a colonel, then there's no good reason to try to split up the regulation of that AI across 20 different agencies.
Trying to do so would not only create unnecessary red tape by forcing each AI model to comply with dozens of different regulatory regimes, but would also dilute the ability of any one of those regulators to acquire the technical expertise needed to intelligently evaluate a cutting-edge system. It’s much better to have general-purpose AI models evaluated by a regulator dedicated to that task than by 20 regulators, none of which really understands the technology.
Google’s sixth principle (adopt a hub-and-spoke model) suggests that Google understands the need for a regulatory “hub” with centralized AI expertise…but this hub can’t be NIST, because NIST is not a regulator and doesn’t want to become one. NIST focuses exclusively on voluntary standards that companies can follow or ignore as they please. When it comes to advanced, general-purpose AI, the stakes are far too high to let each individual company decide whether it feels like keeping its products safe. The Center for AI Policy supports Google’s proposed hub-and-spoke model…but only if the agency at the center of that hub has real regulatory authority.
The Center for AI Policy looks forward to working with Google on commonsense approaches for establishing or strengthening an appropriate regulatory body.