Who’s Actually Working on Safe AI at Microsoft?

Jason Green-Lowe, May 3, 2024

In its new AI transparency report, Microsoft brags about expanding its responsible AI team from 350 to 400 people. What do those numbers actually mean in context?

Microsoft claims that "more than half of these people focus on responsible AI full-time," meaning that almost half have only part-time or occasional duties related to responsible AI. Let's say we're talking about 300 full-time equivalent positions. According to glass.ai, Microsoft had 7,133 AI employees in March 2023; presumably, that number has only grown. So, at most, about 4% of Microsoft's AI workforce is working on responsibility.
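To make the arithmetic explicit, treating 300 full-time equivalents as a charitable estimate: 300 ÷ 7,133 ≈ 4.2%, a ceiling that only falls as the denominator keeps growing.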

Responsibility, in turn, covers an extensive range of topics. For example, the responsible AI team has been "accelerating scientific discovery in natural sciences through proactive knowledge discovery, hypothesis generation, and multiscale multimodal data generation." It's very prosocial of Microsoft to help out the world's biologists and chemists, but that's not quite the same thing as making sure that Microsoft's own AI is safe.

Similarly, a large portion of the team is responsible for basic quality control to make sure that Microsoft's products work. As Microsoft notes, "Our researchers have developed a number of tools and prototypes to assess AI-generated outputs and improve our products. These include an Excel add-in prototype that helps users assess AI-generated code, a case study of how enterprise end users interact with explanations of AI-generated outputs, and research on when code suggestions are most helpful for programmers."

There's nothing wrong with that kind of investigation, but it sounds much more like "market research" than "safety research." Taking credit for "responsible AI" when you're really just checking to see if your customers are getting value out of your auto-suggest features is a bit of a stretch.

The words "disaster," "catastrophe," "extreme," "weapon," "damage," "danger," "virus," and "nuclear" do not appear at all in Microsoft's 40-page report. The word "hack" appears only once – as a positive reference to a hackathon where new code is developed.

If Microsoft is interested in protecting against the extreme risks from advanced AI, then, as part of its commitment to transparency, maybe it should say more about who is working on that kind of safety and what they are doing to protect the public.
