AI-Powered Fraud Is Exploding, and It’s Scarily Cheap


According to TechRadar, a new report from Group-IB shows AI-assisted fraud has grown into a distinct fifth wave of cybercrime. Between 2019 and 2025, first-time dark web posts about AI crime tools exploded by 371%, with interest solidifying after ChatGPT’s 2022 release. Analysts found at least 251 posts focused on exploiting large language models, mostly OpenAI-based systems, and a structured underground market now offers “Dark LLM” subscriptions for $30 to $200 per month. The fastest-growing segment is deepfake-enabled impersonation, with mentions of these tools for bypassing identity checks rising 233% year-on-year. Shockingly, entry-level synthetic identity kits sell for just $5, while a single institution faced over 8,000 deepfake fraud attempts in just eight months in 2025, contributing to verified global losses exceeding $347 million.


The Industrialization of AI Crime

Here’s the thing that changes the game: this isn’t about elite hackers in basements anymore. It’s about industrialization. We’ve moved from isolated experimentation to a stable, subscription-based economy. For the price of a few streaming services, a low-skill actor can now rent a powerful, unrestricted AI model to craft convincing phishing emails, generate malicious code, or manage entire scam campaigns. That “as-a-service” model is the real threat multiplier. It democratizes high-level cybercrime, turning it from a craft into a commodity. And with vendors claiming over 1,000 users, the scale is already terrifying.

Why Deepfakes Are the New Frontier

But the most alarming data point for me is that 233% jump in deepfake tool mentions. Why? Because it attacks the last line of defense: human trust. We've trained people to be suspicious of weird emails and links. But what happens when a "CFO" calls an accountant on a video call, with a cloned voice and a real-time deepfaked video feed, and demands an urgent wire transfer? Traditional tech defenses can't easily stop that. It's a social engineering attack powered by AI, and at $5 for a starter kit, the barrier to entry is basically zero. The $347 million in verified losses is probably just the tip of the iceberg.

What Can Businesses Actually Do?

So, what's the move? The report's advice is layered security, which sounds standard, but the context isn't. You need defenses that specifically look for AI-generated anomalies: phishing language that's a bit too polished, API traffic patterns that don't match human behavior, and verification flows that treat voice and video as spoofable rather than trustworthy. This requires continuous monitoring and updating, because the tools on the dark web are evolving monthly. It's an arms race where the other side just got a massive, cheap manufacturing boost. For industrial and operational technology sectors, where system integrity is critical, that vigilance is non-negotiable.
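To make one of those signals concrete, here's a minimal sketch of a bot-traffic tell: flagging API clients whose request timing is too machine-regular to be human. Everything in it, the function name, the thresholds, the sample data, is an illustrative assumption for this article, not anything taken from the Group-IB report.

```python
# Minimal sketch of one layered-security signal: flag API clients whose
# request timing is suspiciously uniform. Human-driven traffic tends to be
# bursty and high-variance; scripted, LLM-backed tooling often fires at
# near-constant intervals. Thresholds here are illustrative assumptions.
import statistics
from typing import Sequence


def looks_automated(request_times: Sequence[float],
                    min_requests: int = 20,
                    cv_threshold: float = 0.15) -> bool:
    """Return True if inter-request gaps are too regular to look human.

    Uses the coefficient of variation (stdev / mean) of the gaps between
    consecutive request timestamps as a cheap proxy for "scripted".
    """
    if len(request_times) < min_requests:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True  # simultaneous or out-of-order bursts
    cv = statistics.stdev(gaps) / mean_gap
    return cv < cv_threshold  # low variance => likely scripted


# A bot polling every ~2.0s is flagged; a human's irregular browsing is not.
bot_times = [i * 2.0 for i in range(30)]
human_times = [0, 1.2, 7.5, 8.1, 19.4, 20.0, 33.7, 41.2, 42.0, 58.3,
               61.1, 75.9, 80.2, 95.5, 101.3, 118.8, 120.1, 133.0,
               140.7, 155.2]
print(looks_automated(bot_times))    # True
print(looks_automated(human_times))  # False
```

A real deployment would combine many weak signals like this one rather than trusting any single heuristic, but the point stands: against AI-driven fraud you're hunting statistical tells, not known-bad signatures.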

A Permanent Shift

Look, this isn't a fad. The data shows a persistent, high level of activity, not a spike of curiosity. AI-powered fraud is now a permanent feature of the threat landscape. The genie isn't going back in the bottle. The question for every security team now is: are your defenses built for the era of human-led scams, or for the new industrial-scale, AI-powered fraud factory? Ignoring that difference is going to be very, very expensive.
