Who’s Actually Working on Safe AI at Microsoft?

Jason Green-Lowe, May 3, 2024

In its new AI transparency report, Microsoft brags about expanding its responsible AI team from 350 to 400 people. What do those numbers actually mean in context?

Microsoft claims that "more than half of these people focus on responsible AI full-time," meaning that almost half have only part-time or occasional duties related to responsible AI. Let's say we're talking about 300 full-time equivalent positions. According to glass.ai, Microsoft had 7,133 AI employees in March 2023; presumably, that number has only grown. So, at most, we're talking about roughly 4% of Microsoft's AI workforce working on responsibility.

Responsibility, in turn, covers an extensive range of topics. For example, the responsible AI team has been "accelerating scientific discovery in natural sciences through proactive knowledge discovery, hypothesis generation, and multiscale multimodal data generation." It's very prosocial of Microsoft to help out the world's biologists and chemists, but that's not quite the same thing as making sure that Microsoft's own AI is safe.

Similarly, a large portion of the team is responsible for basic quality control to make sure that Microsoft's products work. As Microsoft notes, "Our researchers have developed a number of tools and prototypes to assess AI-generated outputs and improve our products. These include an Excel add-in prototype that helps users assess AI-generated code, a case study of how enterprise end users interact with explanations of AI-generated outputs, and research on when code suggestions are most helpful for programmers."

There's nothing wrong with that kind of investigation, but it sounds much more like "market research" than "safety research." Taking credit for "responsible AI" when you're really just checking to see if your customers are getting value out of your auto-suggest features is a bit of a stretch.

The words "disaster," "catastrophe," "extreme," "weapon," "damage," "danger," "virus," and "nuclear" do not appear at all in Microsoft's 40-page report. The word "hack" appears only once – as a positive reference to a hackathon where new code is developed.

If Microsoft is interested in protecting against the extreme risks from advanced AI, then, as part of its commitment to transparency, maybe it should include more information about who is working on that kind of safety and what they're doing to protect the public.
