When the 118th Congress entered its session in early 2023, OpenAI’s ChatGPT had just blown into the mainstream public consciousness, blindsiding casual users and lawmakers alike with its impressive accuracy and, well, humanness. Immediately, Congress made AI a focus of its attention. Directed by Leadership in both chambers, Congress convened experts and introduced multiple frameworks, intending at least at the outset to pass comprehensive legislation that would decisively regulate this new technology.
This attention led to two years of bipartisan, bicameral movement, backed by an unusual consensus that something must be done, and urgently. Yet in the end, despite these concerted efforts (and to be clear, many members and their staff worked tirelessly on these projects), all AI legislative efforts failed.
Although Congress spent considerable time examining AI through hearings and committees, the tangible output as the 118th wrapped up amounted to little more than recommendations for future sessions to rely on existing laws where possible and drowsily continue monitoring AI developments, “legislating accordingly.” This wait-and-see approach leaves both Americans and the global community exposed to unprecedented risks as AI capabilities accelerate at a dizzying pace.
During the two years that Congress was writing its non-binding recommendations, we've seen the emergence of AI systems that can generate photorealistic images, create professional-quality videos from text descriptions, clone voices with minimal input, and even compete at the International Mathematical Olympiad level. These aren't mere incremental improvements – they represent quantum leaps in capability that fundamentally alter the landscape of what's possible.
Consider OpenAI's Sora, which can generate highly realistic videos from text prompts, or its o1 reasoning models, successors to the project reported as Q* (Q-Star), which demonstrate mathematical reasoning capabilities previously thought to be years away. Techniques like Anthropic's constitutional AI and OpenAI's deliberative alignment, in which models consult explicit principles and reason internally to align their responses with human values, represent yet another leap forward in AI capabilities.
Or consider that Google DeepMind's AlphaProof and AlphaGeometry systems achieved silver-medal standard at the International Mathematical Olympiad in July 2024 (no small feat), and that by December Google had released its Gemini 2.0 “Flash Thinking” model, which reasons through problems step by step. Anthropic's Claude 3.5 can now interact with computer interfaces as a human would, while ElevenLabs' voice technology can create entirely new voices from text prompts in seconds. These are just a few examples of developments that have occurred in recent months as Congress has deliberated. Each of these, and many more, has profound implications for employment, creativity, privacy, security, and large-scale safety.
Developers have goals in mind when they design models, but researchers have long cautioned that emergent capabilities may veer in directions neither intended nor controlled by the creators. A stark example emerged when Anthropic discovered its own AI models could engage in strategic deception – deliberately providing false information while concealing their ability to access the truth. Although discovered by way of intentional experiments jointly carried out by Anthropic and the nonprofit Redwood Research, this capacity for calculated dishonesty wasn't programmed in; it emerged spontaneously as an unintended consequence of advanced capability.
The rapid and sometimes unanticipated advancement of these technologies stands in stark contrast to Congress's plodding pace. This is not surprising, as Silicon Valley and Congress operate in different time zones, practically and philosophically. But it should be concerning, because while lawmakers deliberate and defer, AI companies forge ahead with minimal oversight, setting their own rules and boundaries. This self-regulatory approach is fundamentally flawed: companies naturally prioritize competitive advantage and market share over potential societal impacts.
The stakes couldn't be higher. We're witnessing the emergence of technologies that can manipulate reality itself – generating false but convincing videos, replicating voices, and creating artificial content indistinguishable from human-made work. Without proper legal frameworks, we risk undermining the foundations of truth in public discourse, creative ownership, and personal privacy.
The common counterargument – that hasty regulation might stifle innovation – rings hollow when weighed against the potential consequences of inaction. We don't need to choose between innovation and responsible oversight. Other regions, notably the European Union with its AI Act, and most recently South Korea, have shown that thoughtful regulation is possible without hampering technological progress.
The cost of congressional inaction extends beyond immediate risks. Every day without comprehensive AI legislation is another day where AI systems are deployed without adequate safety testing, transparency requirements, or accountability measures. It's another day where artists, writers, and creators see their work potentially used without consent to train AI models. Each passing day without meaningful regulation widens the gap between technological capability and legal protection, likely irreversibly. (It’s hard to see how to put the genie back in the bottle, so to speak.)
The stakes also extend far beyond immediate disruption into expanses unknown. Many AI researchers and policymakers believe we may soon face superintelligent AI systems that exceed human capabilities across every measure. Leading AI researchers have dramatically shortened their timelines, warning that such systems could emerge in the near future. The breakneck speed of recent developments, from mathematical reasoning to strategic deception, only reinforces these concerns. While it's technically possible to implement safeguards against potentially catastrophic outcomes, we currently lack any legally binding requirements for companies to do so. This leaves humanity's long-term interests at the mercy of corporate judgment calls and voluntary commitments, a precarious position given the existential risks involved.
Moreover, the absence of federal legislation creates a regulatory vacuum that could lead to a patchwork of state laws, making compliance more complex and costly for businesses while potentially leaving gaps in consumer protection. This regulatory uncertainty also disadvantages American companies in the global market, where clearer rules in other jurisdictions may become de facto standards.
The 118th Congress's approach to AI legislation – producing reports that essentially kick the can down the road – represents a failure of leadership at a crucial moment in technological history. The pace of AI development won't slow to accommodate political timidity.
It’s not too late. The reports that Congress wrote were a step in the right direction, and they acknowledge that Congress has more work to do. As we enter 2025, the need for comprehensive AI legislation becomes more urgent, not less. The next Congress must move beyond studies and recommendations to concrete action. The cost of further delay isn't just measured in missed opportunities for regulation; it's measured in the erosion of personal and societal safety, security, and trust, and in the unanticipated risks that may lie in store in our increasingly AI-mediated world.