According to Wired, an investigation into more than 50 deepfake websites found that nearly all now offer high-quality, explicit video generation, with one service alone selling 65 video “templates” for a fee. These services, part of a sprawling ecosystem of websites, bots, and apps, are likely making millions of dollars per year by automating image-based sexual abuse. On Telegram, more than 1.4 million accounts had signed up to 39 deepfake creation bots and channels before the platform removed at least 32 of them after being contacted. Experts like Henry Ajder warn that this “societal scourge” represents one of the darkest parts of the AI revolution, with tools now capable of generating realistic eight-second clips from a single photo and inserting women into graphic sexual scenarios, including the creation of child sexual abuse material (CSAM).
The Industrialization of Abuse
Here’s the thing that’s so chilling: this isn’t just some dark corner of the internet where tech-savvy trolls share crude Photoshop jobs. We’re talking about a full-fledged, user-friendly service economy. It’s been productized. You have menus, templates with names like “fuck machine deepthroat,” tiered pricing for adding audio, and regular software updates on Telegram channels announcing new “poses” and “styles.” Independent researcher Santiago Lakatos puts it perfectly: it’s not just “undress someone,” it’s “here are all these different fantasy versions of it.” They’ve taken the worst human impulse and turned it into a SaaS model. The scale is staggering: millions in revenue, millions of views per month. This isn’t a bug in the AI system; for these operators, it’s the core, profitable feature.
The Platform Problem Is Massive
And let’s talk about the platforms enabling this. Telegram’s spokesperson gave the standard line about prohibiting this content and removing 44 million pieces last year. But come on. The fact that Wired found 39 active channels and bots with 1.4 million accounts before asking about them tells you everything. Enforcement is reactive, not proactive. It’s a game of whack-a-mole they’re destined to lose. Even more insidious is how these services often piggyback on big tech infrastructure: cloud services, payment processors, maybe even underlying AI models. They’re parasites on the legitimate tech stack, and everyone looks the other way until a journalist calls. You can read Telegram’s own terms, which the company says prohibit this, but the reality on the ground is a different story.
Where Do We Even Go From Here?
So what’s the solution? Legislation is crawling behind the technology, as always. The harm is immediate and devastating, while legal recourse is slow and complicated. The report from the Internet Watch Foundation highlights the terrifying rise of AI-generated CSAM, a logical and horrific extension of this “nudify” trend. And the normalization is perhaps the worst part. When a high-profile tool like Grok is used to create thousands of nonconsensual bikini images, it dulls the shock. It makes the next, more explicit step seem incremental rather than catastrophic. We’re effectively training a generation to see digital sexual violation as a casual, available service. Lakatos’s research with Indicator shows this is a money-making machine, and as long as there’s profit, it will persist and evolve. The technical genie is out of the bottle, and we’re utterly failing to control the malicious magic it’s performing.
