Deepfake Surge Drives New Wave of Celebrity Impersonation Scams

A sharp rise in deepfake production is fueling an increasingly sophisticated wave of celebrity impersonation scams, according to security firm isFake.ai. Fraud groups are now using AI-generated video and audio across social media platforms and private messaging apps to deceive victims at scale.

Recent projections from DeepStrike estimate that deepfake production could surpass 8 million files in 2025 — a sixteenfold increase since 2023. At the same time, Europol has warned that up to 90 percent of online content could be synthetically generated by 2026, signaling a dramatic shift in the digital information environment.

From Isolated Fakes to Coordinated AI Campaigns

isFake.ai describes a structural transformation in how celebrity scams operate. Rather than relying on single fake accounts or crude impersonations, fraud networks now deploy multiple AI systems simultaneously.

According to the company, one AI model may gather detailed background information on potential victims, another generates realistic video or cloned voice messages, and a third adapts responses in real time based on the victim’s reactions. The result is a continuous, evolving scam campaign designed to build trust and maximize financial extraction.

“We’re seeing scams shift from isolated impersonations to coordinated AI systems that learn and adapt,” said Olga Scryaba, AI Detection Specialist and Head of Product at isFake.ai. “That makes celebrity scams more persistent and harder to disrupt.”

The firm also highlighted the emergence of so-called “persona kits” — ready-made bundles containing synthetic faces, cloned voices, and fabricated backstories. These kits significantly lower the technical barrier for scammers, enabling repeated and scalable fraud operations.

Public figures are particularly vulnerable because large volumes of legitimate footage already exist online. Scammers can easily pull from interviews, social media posts, and public appearances to create highly convincing impersonations.

Human Judgment Under Pressure

Advances in voice cloning and video synthesis have made deepfakes increasingly difficult to detect. isFake.ai warns that even trained professionals can struggle to identify manipulated content without specialized tools.

Scryaba emphasized that the issue extends beyond technological realism. “The problem is not just better fakes,” she said. “AI content is published and consumed in spaces designed for speed and emotional engagement — social feeds, reels, shorts. People scroll without stopping to fact-check.”

In such environments, synthetic content blends seamlessly into everyday media consumption. As exposure increases, skepticism decreases — and the distinction between authentic and artificial becomes harder to perceive.

A Documented Case

The company cited a recent impersonation case involving actor Steve Burton, known for his role on General Hospital. Scammers allegedly used AI-generated video and cloned voice messages in a prolonged romance scam.

According to isFake.ai’s analysis, the victim believed she was in a private relationship with the actor and transferred more than $80,000 via gift cards, cryptocurrency, and bank-linked services. The fraud was reportedly uncovered after the victim’s daughter intervened.

Technical review of the media used in the scheme revealed characteristics consistent with synthetic content, including cloned voice patterns and subtle visual inconsistencies — indicators that are often difficult to detect without forensic tools.

“The risk is no longer limited to obviously fake videos,” Scryaba said. “Modern deepfake scams rely on realism, repetition, and personalization. Victims are often targeted over weeks or months, which lowers skepticism and increases financial harm.”
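To make the idea of forensic indicators a little more concrete, the sketch below shows one crude, illustrative audio check: measuring how much of a clip's spectral energy sits above a chosen frequency cutoff. Some voice-synthesis pipelines band-limit their output, so an unusually quiet high band can be one weak signal worth a closer look. To be clear, this is not isFake.ai's method, the file name is hypothetical, and real forensic analysis combines many such signals with trained models.

    import numpy as np
    from scipy.io import wavfile

    def high_band_energy_ratio(path, cutoff_hz=8000.0):
        """Return the fraction of spectral energy above cutoff_hz.

        Crude heuristic only: some synthesis pipelines band-limit
        their output, leaving the high band unusually quiet.
        """
        rate, samples = wavfile.read(path)
        if samples.ndim > 1:                     # downmix stereo to mono
            samples = samples.mean(axis=1)
        spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        total = np.sum(spectrum ** 2)
        if total == 0 or rate <= 2 * cutoff_hz:  # silent clip, or cutoff at or
            return 0.0                           # above Nyquist: uninformative
        return float(np.sum(spectrum[freqs >= cutoff_hz] ** 2) / total)

    # "suspect_message.wav" is a hypothetical file name for illustration.
    ratio = high_band_energy_ratio("suspect_message.wav")
    print(f"high-band energy ratio: {ratio:.4f}")

A low ratio on its own proves nothing; legitimate recordings compressed for messaging apps are band-limited too. That ambiguity is precisely why consumer-grade judgment is unreliable and dedicated detection tools matter.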

Recognizing the Warning Signs

Despite increasingly convincing media, traditional scam indicators remain relevant. isFake.ai advises consumers to be cautious of:

  • Unsolicited direct messages from celebrity accounts
  • Requests involving money, secrecy, or urgency
  • Payment demands through gift cards or cryptocurrency
  • Investment or medical promotions featuring celebrity likenesses on social media

The firm stresses that public figures do not privately solicit money, relationships, or investments through unsolicited messages.

For high-stakes situations, independent verification is essential. Consumers should confirm claims through official websites, verified accounts, or trusted third-party sources. Detection and verification tools can also help identify manipulated content.

Verification as a Habit

As synthetic media becomes increasingly normalized, isFake.ai argues that proactive verification must become standard practice.

“As synthetic content becomes more common, verification has to become a habit,” Scryaba said. “The cost of assuming something is real is simply too high.”
