Three Questions for the Munich Security Conference

February 15, 2024

The annual Munich Security Conference (MSC) is one of the leading forums for discussing the most pressing challenges to international security. A central topic for this year's conference, running February 16-18, will be AI.

The past few years have seen rapid advances in AI. Leaders across industry and academia warn that the technology will soon pose challenges on par with nuclear weapons. For global security, we need to get this technology right.

Vice President Kamala Harris will lead the US delegation to the MSC, which will include Secretary of State Antony Blinken, Secretary of Homeland Security Alejandro Mayorkas, FBI Director Chris Wray, Deputy Attorney General Lisa Monaco, and Deputy National Security Advisor Anne Neuberger, as well as Sen. Sheldon Whitehouse (D-RI), who will lead a congressional delegation.

Here are three questions the Center for AI Policy (CAIP) will be looking to hear discussed at the MSC, along with its answers.

AI is evolving rapidly, and there is great uncertainty about how exactly its risks will manifest. Still, experts agree that AI risks will be significant and that governments must be ready to respond to them. How can governments become more adept at assessing and responding to AI risks?

CAIP thinks it is vital to develop the government's technical capacity in AI. Three policy moves would be positive steps forward.

First, bring more leading AI researchers into government, where they can help policymakers better understand developments in AI. Second, treat policies that ensure AI safety as a national security priority. Third, support the efforts of AI Safety Institutes such as those in the US, UK, and Japan, which will strengthen understanding of AI risks, develop standards, and provide a channel for international coordination.

As US companies build increasingly powerful AI models, adversaries will increasingly try to steal them. Currently, US companies are far from having the security to defend against well-resourced adversaries. Should governments support AI companies in bolstering their security?

CAIP believes it is urgent that frontier AI developers and governments work together to implement the best security measures available today and to invest in R&D to defend against well-resourced attackers as soon as possible.

No government or developer can achieve top security alone. Governments and developers will need to work together to safeguard frontier AI.

One concern with AI is that competitive pressures will lead AI developers to keep making AI systems more powerful, even when the reliability of those systems is in question. Do you agree we need to ensure our most advanced AI systems are reliable, and if so, how can we do that?

CAIP thinks it is critical that AI systems are developed and deployed only when the risks are manageable. For our most advanced AI systems, this means developers need practices for ensuring reliability before deployment.

Governments need to coordinate internationally on standards for risk management to make this possible.

Additional information on the 2024 Munich Security Conference agenda:

https://securityconference.org/en/msc-2024/

YouTube channel with 2024 conference panels and discussions:

https://www.youtube.com/@MunSecConf
