AI-generated junk has spread so far across the internet that even the people who run scams and trade attack tools now want it gone.

Reports indicate that cybercriminal forums and channels have filled with low-value, machine-written posts, condemned in blunt terms by the very users who rely on those spaces for illegal business. The complaint sounds familiar to anyone who uses the modern web: too much filler, too little signal, and a growing sense that authentic information keeps getting buried under cheap output. Here, though, the backlash comes from hackers, scammers, and other underground actors who depend on trust, reputation, and readable exchanges to keep their operations moving.

The same flood of AI slop that frustrates ordinary internet users now appears to be clogging the channels cybercriminals use to do business.

That frustration reveals something important about the broader AI era. Bad actors may exploit automation, but they still need functioning communities. When forums overflow with synthetic chatter, users struggle to spot real offers, credible advice, or serious technical discussion. Sources suggest the problem does more than annoy participants; it can reduce the efficiency of underground markets by making it harder to separate useful information from noise.

Key Facts

  • Cybercriminals reportedly complain that AI-generated spam is flooding their forums and discussion channels.
  • The low-quality content appears to disrupt spaces used to discuss scams, cyberattacks, and illicit services.
  • The backlash mirrors a wider internet problem: too much synthetic content and too little trustworthy information.
  • Underground communities still rely on credibility and clear communication, even when their activities are illegal.

The irony cuts deep. For years, security researchers and everyday users have warned that generative AI would accelerate fraud, impersonation, and information pollution online. Now the same pollution seems to have boomeranged back into the criminal ecosystem itself. A tool built to churn out endless text may help bad actors scale some operations, but it also degrades the very environments where they recruit partners, compare methods, and build confidence in one another.

What happens next matters beyond the dark corners of the web. If AI clutter keeps eroding the usability of criminal forums, some groups may tighten access, shift platforms, or search for new ways to verify who and what they can trust. That would not end cybercrime, but it could reshape how illegal networks organize online. The bigger lesson looks harder to escape: when synthetic content overwhelms a platform, nobody wins for long, not even the people who helped normalize spam in the first place.