IASEAI '25: Key Takeaways from the Inaugural AI Safety & Ethics Conference

February 14, 2025

Last week, I represented CAIP at the inaugural conference of the International Association for Safe and Ethical Artificial Intelligence (IASEAI) at the OECD headquarters in Paris. The discussions were focused and urgent: many speakers emphasized that AI has so far advanced faster than the governance frameworks meant to mitigate its risks, and that proactive measures must be taken now to keep those risks from spiraling beyond our control.

IASEAI concluded its inaugural conference with a ten-point call to action, outlining key priorities for policymakers, researchers, and industry leaders just ahead of the Paris AI Action Summit, where discussions on AI governance continued with a broader and more politically diverse audience.

While IASEAI ’25 primarily convened those already committed to AI safety, the summit underscored the deep divisions in how different stakeholders view AI regulation. Some advocate for strict safeguards, warning that an uncontrolled AI arms race could lead to catastrophic outcomes, while others argue that excessive regulation could stifle innovation, hinder economic growth, or allow geopolitical rivals to take the lead. The U.S. and U.K. notably declined to sign onto the AI Action Summit’s pledges, with U.S. Vice President JD Vance stating that overregulation could “strangle” AI development and that safety concerns should not overshadow the need to maintain American leadership in AI technology.

This divergence highlights a core challenge: ensuring AI safety cannot be just a discussion among those who already agree on the need for regulation. The debate must engage those who are skeptical of regulatory intervention, whether due to concerns about economic competitiveness, national security, or ideological resistance to government oversight. AI's risks and opportunities are intertwined, and a balanced approach is necessary to navigate both. That might mean safeguards that mitigate catastrophic risks without stifling beneficial innovation; adaptive regulation, in which oversight evolves alongside AI advancements; or structured risk-tiering, in which high-risk applications face stricter requirements while lower-risk uses remain relatively unencumbered.

As discussions shift from conferences to concrete policy decisions, the real challenge will be turning broad principles into enforceable measures. Voluntary commitments have proven insufficient in the past, and AI safety should not depend on self-regulation by the same companies pushing the boundaries of AI capabilities. Transparency requirements, international agreements, and industry standards must be structured to align incentives rather than create adversarial divides between safety advocates and industry leaders.

At CAIP, we will continue to monitor these developments and support efforts to implement practical, effective measures for AI safety. We invite policymakers, industry leaders, and researchers to collaborate in shaping policies that reflect both technological progress and societal needs. Through constructive engagement and a commitment to responsible innovation, we can work toward solutions that mitigate risks while enabling progress.
