New report by Claudia Wilson of CAIP and Emmie Hine of the Yale Digital Ethics Center.
The proliferation of open-source artificial intelligence (AI) has triggered a contentious policy debate: should open-source AI be regulated as closed models have been? Two prevailing perspectives have emerged: one focused on geopolitical risk, particularly US-China competition, and one grounded in ideological values associated with open-source technology, such as innovation, transparency, and democracy. The former is broadly supportive of export controls and other regulations, while the latter opposes restrictions on open-source technology. Neither framing should be taken at face value, but both reflect legitimate tensions between promoting technological advancement and maintaining strategic advantage in an interconnected world.
Through its work with Congress, the Center for AI Policy (CAIP) has found that US policymakers are grappling with how to reconcile these two perspectives, particularly in light of the highly advanced models released by Chinese startup DeepSeek in December 2024 and January 2025. Much public commentary adopts a single perspective, perhaps with a throwaway comment acknowledging the other, but there has been no attempt to weigh the two in a structured manner. This paper combines both perspectives into a single rubric for assessing open-source AI policies, then uses that rubric to analyze four open-source AI policy proposals.
Our rubric combines three ideological considerations and three geopolitical considerations. The three ideological considerations, as identified by existing literature, are increased transparency, accelerated technological progress, and increased power distribution. The three geopolitical considerations are Chinese misuse of American open-source AI, backdoor risks from the use of Chinese open-source AI, and changes in global power dynamics depending on which country dominates in open-source AI.
There are important nuances to each of these considerations; the paper explores them and summarizes the context relevant to the geopolitical considerations.
We use this rubric to assess four policies: two seeking to address Chinese misuse of American open-source AI, and two seeking to address potential “backdoors” in foreign open-source AI. The first policy is expansive export controls on open model components destined for China. The second is industry-led assessment of whether individual models should be made open source, coupled with independent audits of those assessments. The third requires providers of government AI services and products to audit any model components derived from external open-source models. The fourth is government funding for an open repository of audits of commonly used open-source models and frameworks.
We find that blanket export controls on all open-source AI models would likely be suboptimal and counterproductive. Requiring every user of every open model to undergo a know-your-customer (KYC) process would be highly disruptive to the development of specific-use applications while having limited impact on frontier capabilities, and it would likely do little to prevent misuse by China. Furthermore, this policy would leave domestic misuse of open-source AI entirely unaddressed. There is also a genuine risk that export controls undermine US global power by adding friction for other countries seeking to use American technology. Given that the marginal risk of open-source AI is unclear, such a disruptive policy may not be worth pursuing today.
A more reasonable alternative would be to require developers of foundation models to conduct a risk assessment of each model they intend to make open source. Developers could document their investigations and provide a rationale for how they release the model (e.g., making all model components available without checks, or requiring academics to provide an institutional email address to access full model components for chem-bio-relevant models). Like Meta’s Frontier AI Framework (announced in January 2025), this would entail a structured examination of risk, but it would also be accompanied by independent assurance of the risk assessments. A model-by-model approach, rather than blanket legislation, would likely be less disruptive to technological progress and could be more effective at mitigating risk. It would also address misuse by domestic actors, unlike export controls, which focus exclusively on specific nation-states.
Regarding the “backdoor” risks of open-source AI, we find that audits of government products leveraging open-source AI could be a helpful mitigation. However, the information needed to conduct an audit could be unavailable, and in any case audit results would not typically be shared with the public. An alternative would be to create a public repository of audits of popular open-source AI models and frameworks. Such a resource would be a valuable public good and could create greater trust in open-source AI. Yet its success, as with government audits, depends on the ability to trace model components and the resources to conduct those audits.
Open-source AI is a dynamic technology, and the policy space is nascent. Going forward, policymakers should continue to monitor the performance of open models relative to closed models, as well as where algorithmic innovations originate, to inform their assessments of marginal risk. For similar reasons, it would be valuable to keep monitoring the relative performance of Chinese and American models and to develop more comprehensive comparative benchmarks. Future research should also investigate more deeply the open-source AI safety risks posed by non-state actors.
Read the full report here.