The EU AI Act and the Brussels Effect

Claudia Wilson, August 13, 2024

How will American AI firms respond to General Purpose AI requirements?

When there is no international regulation for a sector, who sets the global rules of the road? Sometimes domestic regulation remains domestic, and companies simply adhere to local laws. However, there is a trend of American companies complying with EU regulations even outside the EU. Policy researchers have labeled this phenomenon the Brussels Effect.

This paper asks whether the requirements of the EU AI Act will lead to a Brussels Effect for American businesses. The Center for AI Policy (CAIP) finds that large American companies are likely to remain in the EU market and to stay broadly compliant with the EU AI Act, even when operating in the US. In most cases, potential EU revenue will exceed the costs of compliance, so most firms will stay in the market, though some smaller firms may choose to leave.

Most firms will choose to comply with the EU AI Act globally, because doing so is likely to be more profitable than running two separate compliance processes and potentially training two separate models. Many of the Act's requirements apply to practices before or during training, so a company that wanted to be non-compliant outside the EU would have to bear the expense of training twice. Avoiding certain requirements, such as incident tracking and cybersecurity standards, could yield modest savings; however, the complexity of duplicating processes may not be worth those limited savings.

Despite widespread debate about the “opt-out” copyright requirement, we find that, with some facilitation from the EU, it is feasible, and firms will likely adhere to it outside the EU as well. While the “opt-out” requirement may limit the availability of training data or raise its cost, agreements with media companies suggest that US AI companies are already preparing to spend more on copyrighted training data, regardless of the EU AI Act. If companies were to pursue separate products for separate markets, it would be because the evaluations requirement delays launch dates; however, this is a tenuous incentive.

These findings may be particularly relevant to those interested in US leadership on global AI standards. If the US remains at its current political impasse regarding AI policy, its companies will be de facto regulated by the EU. To be a leader in global AI governance, the US should consider moving ahead with concrete domestic AI safety policy.

Read the full paper here.
