Digital doppelgängers: How sophisticated impersonation scams target content creators and audiences

Content creation is no longer niche. Over 50 million Americans earn income by making videos, livestreams, podcasts, or other digital media. Many are full-time creators, while others pursue it as a side hustle. Either way, having an online presence is becoming increasingly risky.

Scammers are catching on.

In 2024 alone, impersonation scam reports logged by the Federal Trade Commission showed total losses of almost $3 billion.

The scams are getting smarter and more personal. Criminals no longer rely on awkward emails or broken-English messages. Today, they copy creators’ voices, faces, and personas. The result is something far more convincing and dangerous.

With smart habits and the right tools, however, creators and their audiences can protect themselves. In this article, Heimdal has laid out steps you can take to stay ahead of the scammers.

Real faces, fake messages

Imagine this: Your favorite YouTuber sends a heartfelt video asking for donations to support their next project. Or your go-to podcaster leaves a voice message saying they need urgent help covering medical bills.

Except it’s not them. It’s a scam. It sounds convincingly real, and it’s part of a growing pattern.

Fans are paying the price

Creators aren’t the only ones at risk. Their followers often end up footing the bill.

The FTC reports that Americans over 60 are especially vulnerable. In the past few years, scams targeting this age group have become more frequent and expensive. Reports of losses over $10,000 have quadrupled since 2020, and losses above $100,000 jumped from $55 million in 2020 to $445 million in 2024.

The scam typically begins with a familiar voice. The victim gets a voicemail, an Instagram DM, or a personal video. The voice sounds like someone they know, uses their first name, and ends with an urgent ask, such as, “I need you to wire this. Today.”

And so they do.

The scam is bigger than social media

These tactics don’t stop at creators. Scammers now target the general public while posing as government officials.

In 2023, Americans lost $76 million in cash to fraudsters pretending to be from the IRS, the Social Security Administration, or local law enforcement. In the first months of 2024 alone, losses had already reached $20 million.

The script doesn’t change much: a voicemail from an “agent,” a warning about legal trouble, and an urgent request for payment via gift card or wire transfer. The tone is authoritative. The names sound familiar. The voice? Believable enough to trick thousands.

These impersonations don’t require cutting-edge tech, just enough detail to sound official and enough fear to make people act fast.

FBI alerts and high-profile impersonations

The problem became serious enough that in May 2025, the FBI issued a public alert: scammers were impersonating high-level U.S. officials using cloned voices in phone calls and voicemails.

One fake version of Secretary of State Marco Rubio contacted multiple foreign ministers. Another impersonated White House Chief of Staff Susie Wiles to request money and pardon lists. Some messages came through apps like Signal, while others were old-fashioned voicemails.

These cases made national news, but the strategy wasn’t new; it had simply moved up to higher-profile targets.

If someone can convincingly fake a sitting Cabinet official, imagine how easily they can copy a mid-tier content creator with far fewer digital safeguards.

Businesses are getting hit, too

It’s not just individuals. Businesses are bleeding money to voice scams as well.

In 2024, employees at Ferrari and WPP were targeted by fraudsters who sounded exactly like their CEOs. One case involved deepfake voice messages sent to an executive; another used a fake video call to try to authorize a payment.

The lesson is clear: people trust what they recognize, and scammers know this.

Regulation is catching up, but slowly

In 2024, the FTC introduced the Impersonation Rule, giving the agency new power to shut down websites pretending to be government agencies. Within a year, it had already taken down 13 fake FTC websites.

But that protection doesn’t extend to individuals. Creators and private citizens still fall through the cracks.

State lawmakers across the country are working on legislation to target digital impersonation and AI-based deception. Platform enforcement, however, varies: some social media companies are rolling out watermarking tools or content provenance features, while others still rely on manual reporting.

Until laws catch up and platforms standardize protection, creators remain vulnerable.

What actually helps

Stopping this problem from spreading takes three layers of protection: behavioral habits, technical safeguards, and institutional change. The first layer is the simplest: small but intentional habits that protect both creators and consumers of media.

For creators:

  • Avoid posting raw audio or video that could be used to clone your voice.
  • Use two-factor authentication on every account — even ones you rarely use.
  • Watermark your content (visibly or invisibly) to make it harder to repurpose; a minimal sketch follows this list.
  • Set up safe words or callback protocols with collaborators, managers, or editors.
  • Subscribe to content monitoring services that flag impersonation attempts.
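
To make the watermarking tip concrete, here is a minimal sketch of one approach: stamping a visible, semi-transparent label on a thumbnail image with Python's Pillow imaging library. The file names and label text are hypothetical, and a production pipeline would typically use a proper TrueType font and batch processing.

    from PIL import Image, ImageDraw, ImageFont

    def add_visible_watermark(src_path, dst_path, text):
        # Open the image and prepare a transparent overlay for the text.
        base = Image.open(src_path).convert("RGBA")
        overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        font = ImageFont.load_default()  # swap in a real TTF font in practice

        # Measure the text and anchor it near the bottom-right corner.
        left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
        w, h = right - left, bottom - top
        pos = (base.width - w - 10, base.height - h - 10)

        # 128/255 alpha keeps the mark visible without hiding the content.
        draw.text(pos, text, font=font, fill=(255, 255, 255, 128))
        Image.alpha_composite(base, overlay).convert("RGB").save(dst_path)

    # Hypothetical usage: stamp a channel handle on an upload thumbnail.
    add_visible_watermark("thumbnail.png", "thumbnail_marked.png",
                          "@mychannel original upload")

Invisible watermarking works differently, embedding signed provenance metadata (as in the C2PA standard) rather than pixels a viewer can see.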

Some creators now preemptively tell their audiences, “I’ll never DM you asking for money,” or “Here’s how you can verify a message is really from me.” These small disclaimers help train fans to think critically.

For audiences:

  • Be wary of urgency. Scammers often create artificial time pressure.
  • Verify requests. Don't trust links or DMs — use official channels.
  • Look for red flags. A robotic tone, strange pauses, or odd phrasing can signal something's off.
  • Ask specific questions. A real creator can answer things an AI can't fake.
  • Report suspicious messages to IC3.gov or ReportFraud.ftc.gov.

Behind the scam: the psychology of trust

Why do these scams work? Because people want to believe.

Fans trust creators they’ve followed for years. Employees follow directions from executives without second-guessing. Parents answer urgent voicemails from someone they believe is their child.

Scammers don’t need flawless tech. They just need the victim to hesitate and to wonder, “What if this is real?”

That uncertainty is enough to crack the door open.

What platforms and regulators can do

To close that door, platforms must build smarter guardrails. That means:

  • Auto-detecting cloned voice or video uploads
  • Flagging sudden changes in account behavior (a simple sketch follows this list)
  • Making it easier for creators to verify themselves
  • Giving users clearer ways to report impersonation
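
As one illustration of the behavior-flagging item, here is a minimal sketch of how a platform might surface a sudden shift in an account's activity using a simple z-score over recent daily counts. The function name, threshold, and sample data are all illustrative assumptions, not any platform's actual system.

    from statistics import mean, stdev

    def flag_behavior_shift(daily_actions, threshold=3.0):
        # Compare the latest day's activity against the account's baseline.
        history, latest = daily_actions[:-1], daily_actions[-1]
        if len(history) < 7:
            return False  # not enough baseline to judge fairly
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return latest != mu  # any deviation from a flat baseline
        # Flag activity more than `threshold` standard deviations from normal.
        return abs(latest - mu) / sigma > threshold

    # An account that usually posts a couple of times a day suddenly
    # blasts out 40 messages: worth a closer look.
    print(flag_behavior_shift([2, 1, 3, 2, 2, 1, 2, 40]))  # True

Real trust-and-safety systems combine many such signals with human review; the point is simply that a sharp break from a creator's normal pattern is detectable.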

Governments can help by expanding laws to include private citizens, not just agencies or businesses. They can also partner with platforms and cybersecurity firms to track scam trends and flag widespread campaigns early.

The bottom line

Both the creator economy and AI impersonation scams run on trust. Impersonation is no longer just a celebrity problem or a niche crime. It’s affecting everyday creators, their fans, and the businesses around them. It’s hitting wallets, reputations, and relationships.

And it’s not going away on its own.

The good news? There are real steps people can take to mitigate risk. Protect your content and question suspicious messages. Verify, don’t assume. Share information with your audience before the scammers do.

Whether the impersonator sounds like your favorite streamer or the Secretary of State, the playbook is the same. So is the fix: don’t trust the voice without checking the source.

This story was produced by Heimdal and reviewed and distributed by Stacker.