Earlier this week, the Biden Administration moved to enforce its export controls on advanced semiconductors by pressuring allies to stop servicing the chipmaking equipment that has already been sold to China. This prompted Paul Triolo, a prominent business strategist, to complain in Politico that these export controls are a real solution to a hypothetical problem. As he put it, “There’s this weird obsession with the idea that somehow, whichever country gets to some advanced machine intelligence, also called AGI or artificial general intelligence — that’s going to give a country an edge, either economically or politically or militarily.”
Mr. Triolo isn’t alone in his skepticism about AGI – McKinsey published a report last month claiming that AGI is still decades away. According to McKinsey & Co., AGI is still “purely theoretical” because computers can’t yet pass the “Turing test,” an exercise proposed by Alan Turing in 1950 in which judges try to guess whether a stream of real-time text messages is coming from a human or a computer.
McKinsey usually does good research, but in this case, they’re just plain wrong – AI started passing Turing tests for sound effects in 2016, for photos in 2022, for conversation in 2023, and for behavioral economics games in 2024. The reason people are panicking about deepfakes is that AI now routinely generates full-color, high-resolution videos of things that never happened. People mostly can’t tell that the videos are fake: that’s what makes them dangerous. You can argue that today’s AIs aren’t “really” thinking, but that argument mostly amounts to moving the goalposts backward every time AI scores a goal – as you read this, AI is doing almost all the things we used to say a mere unthinking machine would never be able to do. AI can drive your car, beat you at poker, diagnose your cancer, write poetry on demand, correctly interpret your facial expressions, and design new skyscrapers, complete with a list of the components needed for construction and their costs. The future has already arrived.
It’s possible that part of the confusion comes from the awkwardness of the term “artificial general intelligence.” How general does an intelligence have to be to count as AGI? What does it mean to be “human-level”? There aren’t great answers to these questions, but it might help to think about what it would mean to have “genius-level AI” widely available. Suppose you have access to a software program that’s roughly as good at laying out AutoCAD blueprints as a genius-level human architect. Even if that software isn’t “really” thinking, shouldn’t we still expect it to revolutionize the construction industry?
If the Pentagon has genius-level military AI, then it’s still classified – but there’s no special reason the world’s major powers couldn’t get there within the next five to ten years. Within a generation, ordinary progress on the kind of software that’s already being sold will almost certainly result in AI systems that can offer genius-level advice. You’ll be able to press a button and get the kind of insights that would normally require hiring a few thousand genius-level colonels, genius-level engineers, genius-level hackers, and genius-level diplomats. This is more than just a way to “give a country an edge,” as Triolo puts it. It’s a total paradigm shift. We should be getting ready now to handle the safety implications of that kind of brainpower.