The Cost of Doing Business in an AI World

December 12, 2024

Last week, the nation was shocked when a young man assassinated the CEO of UnitedHealthcare (UNH), in plain sight, on the streets of Manhattan. In the days that followed, as the manhunt for the suspect unfolded, facts came to light suggesting a crime motivated by fury at the healthcare system and its willingness to deny coverage for lifesaving medicine in the name of maximizing profits.

Even as I sit here, coverage of the event is interspersed with a seemingly endless stream of first-person TikTok accounts of egregious healthcare denials.

Let me preface the rest of this by saying: I’m not here to assign blame. This is an emotionally and factually complicated story, and many questions remain unanswered.

But we do know one thing for sure: the role of AI as the technological culprit is undeniable. And the human complicity in allowing AI to run roughshod over unsuspecting subscribers with little oversight must be addressed.

Here are the facts, as variously reported by the press, claimed in several court cases, and even concluded in a Senate report released before the incident: (1) UNH approved and deployed an “auto-authorization model,” an AI system for reviewing health coverage claims; (2) the deployment coincided with a dramatic uptick in coverage denial rates; and (3) the AI’s increased denials left large numbers of people without coverage they would have received but for the use of AI to assess their claims.

To be clear, there is nothing (so far) to suggest that AI-driven claim denials directly shaped this particular individual’s grievances. Nonetheless, it took an extreme event garnering national attention to provoke widespread anger over the rising rate of healthcare coverage denials, and in turn to draw attention to AI’s role in them, not just at UNH but at other providers as well.

In the case of UNH, there is evidence that the powers that be actually knew the AI was making what they called “mistake” denials, and that those denials “enhanced profitability.” The Senate report from October, for example, found that UNH (and others) had been using “AI-powered tools” to deny coverage to Medicare subscribers. Several court cases are pending to that effect, claiming that UNH knew AI was denying coverage at nearly twice the rate seen before the program was deployed.

But when it comes to AI’s role, that’s beside the point. Whether the company knew and did nothing to rectify the problem because it benefited their profits, or whether the AI was simply malfunctioning unbeknownst to them, does nothing to change the fact that the use of AI had a direct adverse effect on countless Americans.

Which raises the question: how much, and how often, are we as a nation willing to acquiesce to the market pressures to implement AI without thinking critically about the risks – particularly when the stakes are this high?

To be clear, healthcare corporations are not alone in their eagerness to benefit from the efficiencies of artificial intelligence. The banking, housing, and employment sectors are just a few of the industries shifting increasingly toward AI to streamline their daily operations and separate the wheat from the chaff among applicants. The usual rationale is that AI makes the process more efficient for the company and the experience better for consumers, even as it boosts profits: a win-win.

That’s not a problem in and of itself. But when the companies use AI to profit at the expense of human life (again, knowingly or not), it’s a problem.

Even without AI, patients may not know why exactly they’ve been denied coverage, whether the denial was correct, or how to appeal the decision. This has always been the case, but it is exacerbated by the black box nature of AI.

The problem is worsened by the fact that patients may not even know that AI is being used to approve or deny coverage, because the transition from human analysis to AI analysis often happens with little or no notice to a company’s subscribers. Patients who assume that their denial was the result of a thoughtful calculus by a fellow human being might feel differently about contesting an automated determination.

Would a subscriber be more inclined to appeal a denial they knew was made by an inscrutable AI rather than a human being? Would it make a further difference if that subscriber knew that (as alleged in one of the lawsuits against UNH) 90% of the algorithm’s recommendations had been reversed on appeal?

Maybe, maybe not. Either way, subscribers deserve to know where their denials are coming from and to have the opportunity to challenge them.

We can start with transparency from the user’s perspective: let customers know at the outset when they are being evaluated by AI. Some online services already do this. Many job application sites, for example, disclose that they use AI to screen applicants against their rubric, and you can allow this or decline it. Of course, if you decline, you no longer get to apply at all, but at least you can make an informed decision.

More transparency can be useful even when it doesn’t include a formal process for opting out. Once subscribers are put on notice that AI is being used to consider their case, they have more of the information they need to decide whether and how to appeal if their medical coverage is denied.

Beyond the user empowerment side of things, companies also need to think more critically about when a technology is ready for large-scale deployment, especially when the stakes are as high as life and limb.

There also needs to be human oversight of AI-based decisions. Ongoing evaluations can catch an AI that is going off course before the harm reaches this magnitude. Of course, it may turn out that the humans involved willfully ignored oversight requirements. We can’t, after all, blame AI for all of humanity’s own faults, any more than we can be expected to foresee every misstep we might take as technologies advance. We can, however, be more intentional in deployment, thinking critically all along the way before rushing to widespread use, even if the steps taken are piecemeal.

Everyone has a role to play here:

  • The AI labs that develop the models should test their algorithms before unleashing them and honestly describe their capabilities;
  • The industries that embrace algorithmic transition need to ensure their AI models will honor their contracts and treat customers fairly; and
  • The federal government, tasked with serving the American people, needs to act more urgently to fund AI safety research and pass legislation to put safeguards in place before things go awry.

AI, like us, isn’t perfect. It’s unfortunate that it took a tragedy to elevate the widespread effects of UNH’s AI misdeterminations to the level of national outcry. By being more thoughtful and transparent about AI deployment going forward, and with proper oversight, we can hope to avoid the next such misstep.
