Assessing Amazon's Call for 'Global Responsible AI Policies'

Jason Green-Lowe
August 1, 2024

Last week, David Zapolsky, Senior Vice President of Global Public Policy & General Counsel at Amazon, published a blog post calling on both companies and governments to “align on global responsible AI policies.” 

At that highest level of generality, the Center for AI Policy (CAIP) wholeheartedly agrees. We do need global responsible AI policies, and those policies do need input and support both from the federal government and from large, medium, and small AI developers. Some of the steps Amazon has taken to improve its responsible AI practices, like watermarking its images and issuing model cards, are useful and appreciated.

However, when we look more closely at what Amazon has in mind for “aligning on policies,” CAIP has serious concerns. Zapolsky writes that “companies building, using, or leveraging AI must commit to its responsible deployment.”

What, exactly, does this commitment consist of? 

Amazon doesn't exactly have a pristine record on product liability, copyright compliance, or whistleblower protection. CAIP is not calling out Amazon as a uniquely risky corporate actor; the trouble is that shareholders of large companies generally reward executives for boosting dividends, not for living up to informal commitments.

Issuing a statement about your passion for responsible AI and calling it a commitment is easy. Actually living up to such commitments is hard. 

That’s why CAIP strongly advocates that the United States government should not rely on voluntary commitments, but should instead codify requirements in legislation with real financial consequences for companies that fall short of minimum AI safety standards.

It’s perfectly reasonable to involve major companies like Amazon in the design of those safety standards; Amazon employs many of the world’s leading experts in AI, and those experts should be consulted. But when it comes time to decide whether Amazon has successfully met those safety standards, neither Amazon nor any other trillion-dollar company should be left to serve as the sole judge in its own case. 

Instead, we need rigorous third-party evaluations to indicate whether a company has actually deployed AI responsibly. If the answer is no, then the government should have the power to step in and require the company to make its AI systems safer. 

Yesterday, the Senate Commerce Committee approved the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act and the Future of AI Innovation Act (FAIIA), which together would ensure that these objective evaluations are available. Once these bills pass the full Senate and the House, the next step is legislation requiring all large developers of advanced AI to use such evaluations.

The Center for AI Policy has drafted model legislation that would achieve this goal, and we look forward to working with Amazon and all other stakeholders who truly value responsible AI to improve that legislation and get it passed.
