This Monday, the Federal Trade Commission’s final rule banning fake AI reviews took effect.
It is now an unfair business practice for companies to purchase, create, or disseminate reviews that appear to have been written by a real consumer when, in fact, they were written by AI.
Similarly, it is now against the law for companies to buy or sell “indicators of social media influence,” such as “likes” or followers, to make a human reviewer appear more influential than they really are.
The FTC’s final rule was approved by a unanimous 5-0 vote, including support from more conservative commissioners like Andrew Ferguson (who served as chief counsel for Senator Mitch McConnell) and Melissa Holyoak (the former solicitor general for Utah).
The Center for AI Policy (CAIP) is pleased to see bipartisan support from the FTC on this critical issue.
Fake AI reviews are the first step on a long, slippery slope toward the wrong kind of AI-driven economy.
E-commerce already makes up 16% of total US retail sales, and that proportion is expected to increase steadily over the next few years. Online sales cast a long shadow over which products brick-and-mortar retailers can stock – if you can’t compete with online prices, then you risk losing your customers to the convenience of online shopping. Once-mighty retail giants like The Gap, Bed Bath & Beyond, and Sports Authority have gone bankrupt or closed hundreds of stores, citing the rise of Amazon as an important factor in their decline.
In turn, most of what gets sold online is driven by algorithms acting as silent and mysterious assistants. The products that fill up your front page of purchase options are automatically selected for you by black-box AIs that can’t explain their recommendations. You don’t know why those products appeared on your monitor, nor does Amazon; only the black box knows.
When real human consumers write real product reviews, you can use those reviews to cross-check Amazon’s recommendations and seek out products that will actually meet your needs.
Without that human input, though, what’s to stop AI from making all of your decisions for you? Amazon is currently rolling out AI-based tools to answer your questions about those products, and eventually hopes to have AI agents make the final shopping decisions. Over the next few years, AI will shift from assisting humans to replacing humans.
AI business agents are potentially smarter, faster, and more relentless than human merchants.
Unlike real businesspeople, AIs don’t need to sleep, they don’t forget anything, and they don’t get stuck on bad decisions to protect their egos. A successful AI agent can clone millions of copies of its software in a few days, whereas human businesspeople need years to train just a few apprentices.
That means if we build an economy where AI vendors sell AI-designed products to AI customers, we should not expect humans to remain in charge of that economy—the AIs will quite literally outcompete us. Once AIs learn to earn the money they need to purchase their own semiconductors and electricity, it’s not clear why AI will need humans at all.
This is what soft takeover risk looks like.
It is becoming increasingly possible that someone, someday, will use AI to build killer robots or hack into all of the world’s banks. But AI doesn’t need to be evil to take over the world—it just needs to be better at e-commerce than we are. If we’re not careful, AI will acquire most of the world’s wealth through perfectly legal means.
To learn more about how this might happen and how we can stop it, check out CAIP’s recent podcast interview with Dr. Michael K. Cohen, an expert on AI agents.