Last week, the Third Circuit Court of Appeals issued a ruling in the tragic case of a 10-year-old girl who accidentally hanged herself while trying to replicate a “Blackout Challenge” shown to her by TikTok’s video feed.
The judges distinguished between two different types of algorithmic behavior: passive algorithms that simply deliver content a user has sought out, and algorithms that actively select and recommend content to users.
According to the judges, a purely passive algorithm would be immune under the increasingly notorious Section 230 of the Communications Decency Act. Most other courts that have considered Section 230 claims have likewise found the defendant platforms immune.
However, in this case, the judges refused to apply Section 230 and instead allowed the girl’s family to try to “hold TikTok liable for its targeted recommendations of videos it knew were harmful.” The Third Circuit decided that TikTok’s algorithm went so far beyond the original understanding of what Section 230 was designed to cover that no immunity was appropriate.
Section 230 was originally designed to protect bulletin-board-style services like AOL, Prodigy, and Yahoo, which merely hosted content created by third parties and allowed interested users to download that content if they wished. An online bulletin board might censor some of the most offensive posts to create a more family-friendly environment, but early Internet providers were not in the business of recommending content to their users.
By contrast, today it is so common for algorithms to decide what appears in our “feeds” that we sometimes forget this used to be a decision left in the hands of the reader. Most of us consume social media for roughly two hours a day without knowing – much less controlling – where that content comes from.
As Cal Newport noted in his book Digital Minimalism:
“Hundreds of billions of dollars have been invested into companies whose sole purpose is to hijack as much of your attention as possible.”
The more you watch, the more companies like TikTok and YouTube get paid by advertisers. This is a serious enough social problem when editorial decisions are made by corporate executives—but what about when the decisions are made by AI?
The AI algorithm that decided to show videos of hangings to 10-year-old girls did not know or care about child psychology and did not have any sense of shame or morality. By their very nature, artificial intelligences are amoral. They have no sense of right and wrong – all they can do is accomplish their task as efficiently as possible. If a careless trainer tells them their task is to maximize advertising revenue, they will do so with ruthless and single-minded purpose, even if that means killing some of their customers.
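To make the point concrete, here is a minimal sketch – in Python, and purely hypothetical; it is not TikTok’s code, and the Video fields and function names are invented for illustration – of the difference between an objective that only maximizes predicted engagement and the same objective under a simple safety constraint:

```python
# A minimal, hypothetical sketch of an engagement-maximizing recommender.
# The naive objective below knows nothing about age, harm, or morality;
# it ranks candidates purely by predicted watch time.

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    predicted_watch_seconds: float  # model's guess at how long this user will watch
    flagged_harmful: bool = False   # a safety signal the naive objective ignores


def recommend_naive(candidates: list[Video]) -> Video:
    """Pick whatever maximizes expected engagement, and nothing else."""
    return max(candidates, key=lambda v: v.predicted_watch_seconds)


def recommend_with_safety_floor(candidates: list[Video]) -> Video:
    """Same objective, but subject to a hard safety constraint."""
    safe = [v for v in candidates if not v.flagged_harmful]
    if not safe:
        raise ValueError("no safe candidates to recommend")
    return max(safe, key=lambda v: v.predicted_watch_seconds)


if __name__ == "__main__":
    feed = [
        Video("cute puppies", 45.0),
        Video("dangerous 'challenge' clip", 120.0, flagged_harmful=True),
    ]
    print(recommend_naive(feed).title)              # -> dangerous 'challenge' clip
    print(recommend_with_safety_floor(feed).title)  # -> cute puppies
```

The naive objective happily surfaces whichever clip it predicts will be watched longest, regardless of what it depicts; the constrained version is the kind of minimum safety standard argued for below.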
We should think very carefully before continuing to expand the power of these amoral AIs. They are already controlling our social media feeds, restaurant recommendations, dating suggestions, and housing prices. Before we also put them in charge of our healthcare system, the power grid, and the stock market, it is critical to have the government set some minimum safety standards to ensure that powerful AIs are trained well enough to avoid killing their customers.