California Senate Bill 1047 represented America's best opportunity to date to establish guardrails and rules of the road for emerging artificial intelligence (AI) technology. This bill resulted from extensive collaboration with industry stakeholders and technologists and was amended multiple times to address concerns. Governor Newsom's veto ignores the urgent need for proactive measures to mitigate the risks of advanced AI.
The Financial Times’ editorial rightly notes that it would “be better if safety rules were hashed out and enacted at a federal level.” However, with Congress bogged down in election-year politicking, states cannot afford to wait any longer to begin setting AI safety rules.
It may sound appealing to limit AI regulation to “high-risk environments,” but advanced AI is a general-purpose technology that cannot be neatly confined to any one industry. By their very nature, neural networks can learn and perform a wide range of tasks. An AI model built for document translation, for instance, could later be adapted to operate critical systems such as the power grid, cell towers, weapons systems, or stock markets.
SB 1047 would have introduced much-needed accountability measures for large AI companies: developers spending over $100 million to train an AI model would have had to implement basic safety protocols and maintain the ability to shut down potentially harmful systems.
To protect the public from potentially catastrophic risks, we don’t need to “rework the proposed rules”; we need to swiftly pass them into law, whether at the state or federal level. The urgency of AI safety cannot be overstated, and America must act now to prevent serious harm from advanced AI.