5 Horror Movie Tropes that Explain Trends in AI
Like Halloween, AI can have you feeling some combination of titillated and terrified. With ChatGPT breathing and screaming like a human, potentially a billion people losing their jobs, and experts warning of AI doomsday, you’d be justified in turning on all the lights in your house and arming yourself to the teeth. To celebrate the season, here are five horror movie tropes that explain current issues in AI.
One of the greatest concerns in AI is that we’ll see a sudden leap in capabilities, and AI will go rogue and act against humans before we can figure out how to contain it. Some have called for an international pause on AI development until we know how to make AI safe. As a CAIP report explains, advanced AI systems have already outpaced their developers’ understanding. If we fail to prepare for an abrupt change with AI, Leatherface is going to get a free shot on our noggins.
Last month, driven by growing AI power demands, the Department of Energy announced a $1.5 billion loan guarantee to revive the Holtec Palisades nuclear plant in Michigan, and Microsoft and Constellation Energy unveiled a $1.6 billion deal to restart a dormant reactor at Pennsylvania's Three Mile Island plant. Is our sense of foreboding from the mutant horde from Chernobyl Diaries? A minivan full of NIMBYs? Nope - it’s the coming wave of data centers devouring energy like a zombie with a brain sandwich.
US Energy Secretary Jennifer Granholm recently said that, to provide the energy needed for AI, “we basically have to double the size of our electric grid.” So we’re not just snooping around the ol’ plant; we’re moving in with the kind of mortgage that makes you ignore blood seeping out of the walls. (FYI - the average nuclear power plant has a regulatory burden of about $8.6 million annually. Meanwhile, Congress has yet to enact even basic safety requirements for AI systems that might be more dangerous.)
The scientists who achieved breakthroughs in AI, including Yoshua Bengio and “godfather” of AI Geoffrey Hinton, have warned that unchecked AI advancement could result in the extinction of humanity. It’s a classic horror scenario: the scientist can’t control his creation; the medium can’t unconjure the spirits; the sign on Annabelle’s case and the doll’s facial expression are sending us a clear warning. Yet trillions of dollars will be invested in AI development without ensuring that effective safeguards are in place.
With the EU AI Act’s singular move to ban harmful AI practices and establish safety requirements, the European Union is effectively heading down the creaky basement steps on its own. The rest of the world is waiting upstairs, hoping for the best. There’s strength in numbers, but the lack of US leadership and concerted global action on AI safety policy is resulting in disjointed, lackluster efforts (not unlike the sequels in the Saw franchise). California tried to step up: its legislature passed a bill with strong bipartisan support that set basic safety requirements for AI models costing at least $100 million to train, but Governor Newsom cut it down following a nefarious misinformation campaign. We need a hero to rally the people like Ash from Army of Darkness, but so far all we have is Ash from Alien, an android that sells out the humans for corporate profits.
Sam Altman has a supernatural ability to overcome adversity. So far, he’s been hit with safety questions raised by Senators, a complaint from Scarlett Johansson about stealing her voice, a fraud lawsuit from Elon Musk, and the disbandment of not one but two of OpenAI’s internal safety teams after key personnel resigned in protest. Even OpenAI’s board tried to stop him, but their attempt was no more effective than six bullets and a hard fall off a balcony against Halloween’s Michael Myers. Sam returned. OpenAI’s computing power continues to grow. And we expect several more installments in this franchise.
(Pictured above: Altman, explaining how he “Michael Myers-ed” SB 1047 and OpenAI’s safety team.)
While the rules for surviving AI aren’t as clear-cut as the rules for surviving a horror movie, the Center for AI Policy has put forward comprehensive legislation and several narrower proposals to improve AI safety. With all due respect to John Carpenter’s iconic ending to The Thing (1982), the answer cannot be that we just wait and see what happens. We’re trying to prevent the horror, not just make it to the sequel.