As we look ahead to 2025 and a new Donald Trump administration, significant changes appear imminent in America’s artificial intelligence (AI) policy landscape.
The next Trump administration’s approach will likely pivot away from the bureaucratic framework established during the Joe Biden administration.
The Center for AI Policy (CAIP) anticipates President-elect Trump will radically scale back the executive branch's efforts to define best practices for the AI industry. Instead, the administration will focus on promoting American competitiveness, particularly with respect to China.
Trump’s allies in the technology sector have primarily supported this direction, arguing that reducing regulatory barriers will accelerate innovation.
CAIP sees four top-line implications of this AI policy realignment, all of which will be substantial.
Trump has pledged to repeal Biden’s AI executive order, which currently sets standards on AI safety, transparency, and competition. The Republican Party platform calls for Biden's order replaced by "AI development based on free speech and human flourishing."
Depending on what, if any, measures the Trump administration adopts to replace Biden's AI executive order, this repeal could disrupt or even eliminate the United States AI Safety Institute’s pre-deployment testing and voluntary safety protocols. While there is certainly room to improve on the Biden AI executive order, a careless or hasty repeal could negatively affect the consistency and rigor of safety testing across the AI industry.
Industry self-regulation through ad hoc voluntary commitments could become the primary governance mechanism, replacing the Biden administration's government-led search for consensus-based best practices.
CAIP expects state-level and global action on AI regulation to accelerate in the new year in response to reduced federal oversight. This trend toward state-based and global regulation will create a complex patchwork of safety requirements and guardrails for AI companies to navigate. Ironically, a prolonged absence of federal regulation may increase the total compliance burden on global AI companies.
All signs indicate that the next Trump administration will make national security and economic competition with China central themes in AI policy discussions. As part of this competitive focus, military applications of AI may receive increased attention and investment, signaling a shift toward defense-oriented uses of AI technologies.
Although it is unclear whether Trump favors a "Manhattan Project" approach to accelerating AI development, the next Trump administration will most likely take an active role in ensuring that the US armed forces maintain technological superiority. This will be done by slashing the amount of paperwork required to move forward on new projects and steering government funding to AI projects that are seen as especially promising for American dominance.
Trump’s tech allies have emerged as influential voices shaping this vision for AI development. Their perspective typically favors minimal government intervention and emphasizes maintaining America’s competitive edge through private-sector innovation rather than regulatory frameworks.
For example, Vice President-elect JD Vance supports open-source AI development while expressing concerns about regulatory capture by Big Tech AI companies. Venture capitalist Marc Andreessen and others in the VC community have pushed for minimal regulation, while Peter Thiel has emphasized prioritizing US competitiveness against China over safety considerations.
One important exception to this trend is Elon Musk, who recently endorsed California Senate Bill 1047, which would have established important safety testing requirements for the largest AI systems. Trump's victory speech hailed Musk as "a new star" and "an amazing guy." Although Musk is enthusiastic about slashing federal spending, he also believes that AI should be regulated "just as we regulate any product/technology that is a potential risk to the public."
As we enter 2025, AI safety legislation and regulation in the United States is at a significant inflection point. The choices made about oversight, safety standards, and development priorities will have lasting implications for how this transformative technology evolves and whether it is deployed safely and securely across American society. A new Trump administration suggests there will be minimal federal AI oversight in the immediate future, with safety addressed (if at all) by states, global bodies, and industry self-governance.