The scam pitch looks polished, familiar, and dangerously believable: a celebrity face, a casual interview clip, and a promise that sounds too easy to ignore.

According to Copyleaks, scammers have used AI-generated videos of celebrities including Taylor Swift and Rihanna to push dubious services on TikTok. The company says these ads often place stars in settings viewers already trust, such as red carpets, podcasts, and talk shows, then manipulate real footage with AI to make the endorsements appear authentic. The result blurs the line between fan content, advertising, and outright fraud.

What makes these clips effective is not just the celebrity likeness, but the illusion of a real recommendation delivered in a familiar setting.

Reports indicate that many of the ads promote rewards programs promising users money, perks, or special benefits. That formula taps into two powerful online instincts at once: trust in recognizable faces and the lure of fast, low-effort gains. On a platform built for speed and endless scrolling, viewers may not stop long enough to question whether the person on screen ever said those words at all.

Key Facts

  • Copyleaks says scammers are using AI-generated celebrity videos in TikTok ads.
  • Taylor Swift and Rihanna appear among the celebrities referenced in the findings.
  • The clips often mimic interviews, red carpets, podcasts, or talk-show appearances.
  • Many ads reportedly promote questionable rewards programs and similar offers.

The rise of these clips points to a broader shift in online deception. Deepfake scams no longer rely on crude impersonations or obvious editing errors. They borrow the look and rhythm of legitimate media, then slide fraudulent pitches into that familiar frame. For platforms, that creates a harder moderation problem. For users, it means old rules, like trusting what looks like a real video, no longer hold up.

What happens next will matter far beyond one app or one celebrity. As AI tools get cheaper and faster, reports suggest scam campaigns will only grow more convincing and more targeted. That puts pressure on TikTok and other platforms to catch manipulated ads earlier, label synthetic media more clearly, and shut down repeat offenders before fake endorsements turn into real financial harm.