Top 10 Deepfake Phishing Scams


These Top 10 Deepfake Phishing Scams highlight how criminals are evolving beyond emails and text messages, exploiting even our instinct to confirm a sender’s identity through a phone or video call. Unfortunately, with deepfake technology, scammers can convincingly imitate voices and faces in real time, making these once-reliable verification methods far less trustworthy.

A deepfake phishing scam employs advanced AI-driven tools to impersonate someone’s voice, face, or both, usually someone the target already trusts. The imitation is designed to pressure or persuade the victim into transferring funds or revealing confidential information.

While these attacks have primarily focused on businesses and their staff, the technology has become so affordable and accessible that private individuals are now in the crosshairs as well.

These scams are not restricted to standard phone conversations. Criminals may use messaging platforms like WhatsApp or conduct real-time deepfake video calls on services such as Zoom, fabricating a live, interactive likeness of a family member or colleague. Whether through audio or video, the objective is the same: to create a convincing illusion of authenticity so the victim feels compelled to comply with the request.

From fake CEOs to fabricated family emergencies, deepfake phishing scams are evolving fast. Here are the Top 10 Deepfake Phishing Scams that reveal just how dangerous this technology has become.

List of Top 10 Deepfake Phishing Scams with Examples


1. Arup Engineering Firm – $25 Million Deepfake CFO Scam

In February 2024, Arup, a British design and engineering firm, fell victim to a sophisticated deepfake scam that cost the company approximately $25 million. Criminals used AI-powered video and audio cloning to impersonate the company’s CFO during a video call, convincing an employee in Hong Kong to transfer funds to a fraudulent account. The money was rapidly dispersed across multiple offshore accounts, making recovery almost impossible.

In response, Arup has introduced stricter payment authorization protocols, deployed advanced AI-driven anomaly detection systems, and expanded employee training on deepfake awareness so staff can identify and report suspicious communications.

2. WPP CEO Voice & Video Deepfake Attempt

In May 2024, global advertising giant WPP narrowly avoided a deepfake CEO scam. Fraudsters created a fake WhatsApp account posing as CEO Mark Read and organized a Microsoft Teams meeting where both video and audio deepfakes mimicked senior executives. Their aim was to solicit confidential information and financial transfers from unsuspecting employees.

Following the incident, WPP has implemented mandatory identity verification for all virtual meetings involving financial discussions, as well as enhanced internal awareness campaigns to train employees in spotting manipulated video and voice.

3. LastPass – CEO Voice Deepfake Scam

In early 2024, LastPass was targeted by a deepfake scam that used AI-generated voice messages to impersonate the company’s CEO. Employees received calls, voicemails, and text messages urging them to share sensitive credentials or approve urgent transactions. One vigilant employee noticed inconsistencies in tone and context, preventing any loss.

After the attempt, LastPass strengthened internal verification processes for all executive communications, introduced a “voice passphrase” system for urgent calls, and integrated AI-based caller authentication into its communication tools.

4. Ferrari – CEO Deepfake with Southern Italian Accent

In July 2024, Ferrari was the target of a deepfake voice scam impersonating CEO Benedetto Vigna. Scammers, using a convincing southern Italian accent, attempted to pressure finance executives into making a large transfer. The fraud was uncovered when an employee asked the caller about a book Mr. Vigna had recently recommended, a detail the AI impersonation could not supply.

Ferrari has since introduced knowledge-based authentication for all high-value transactions and expanded its fraud prevention systems to detect anomalies in voice cadence and linguistic patterns.

5. Italian Business Leaders Targeted via Patriotism

In early 2025, several prominent Italian executives were targeted with deepfake voice impersonations of political figures, including Defence Minister Guido Crosetto. The scammers claimed to be raising urgent funds to rescue Italian journalists abroad, and successfully extracted at least €1 million from one company.

In the aftermath, Italian business associations have urged companies to adopt multi-person approval processes for large transfers, and many affected firms have integrated real-time facial and voice verification technologies for all politically sensitive communications.

6. UK Energy Firm – Early CEO Audio Deepfake Fraud (2019)

In one of the earliest known deepfake scams, a UK energy company’s managing director was tricked into transferring approximately $243,000 to a Hungarian supplier. Fraudsters used AI-generated audio to mimic the German CEO’s voice, instructing the transfer as an urgent business necessity.

The company responded by instituting mandatory secondary confirmation for all payments above a set threshold, and began collaborating with cybersecurity firms to monitor for deepfake threats in real time.

7. Celebrity & News Anchor Deepfakes for Investment Scams

From 2023 through 2024, scammers created deepfake videos of celebrities like MrBeast and news anchors such as Matthew Amroliwala to promote fake investment opportunities tied to Elon Musk. Social media platforms were flooded with these convincing fakes, duping thousands into sending money to fraudulent sites.

Broadcasters and celebrity management teams have since partnered with tech companies to watermark authentic video content and coordinate rapid takedown requests for deepfake material across social platforms.

8. YouTube “Double Your Crypto” Deepfake Scam

In early 2024, cybercriminals hijacked popular YouTube channels and streamed deepfake videos of Elon Musk and Michael Saylor promising to “double” any cryptocurrency sent to their wallet addresses. Victims collectively lost more than $600,000.

YouTube responded by accelerating its AI detection of synthetic video content, adding proactive scanning for altered livestreams, and tightening its account recovery process to prevent future hijackings.

9. YouTube Creators Deepfake Phishing Campaign (2025)

In 2025, a phishing campaign used a deepfake of YouTube CEO Neal Mohan embedded in a fake “YouTube Creators” portal to trick content creators into entering their login credentials. The video appeared genuine and urged immediate verification to avoid account suspension.

Google has since bolstered YouTube Creator security with stronger two-factor authentication enforcement, phishing simulation training, and automated scanning for spoofed platform URLs.

10. Senator Ben Cardin Targeted via Political Deepfake (2024)

In late 2024, US Senator Ben Cardin was approached on a Zoom call by what appeared to be Ukrainian foreign minister Dmytro Kuleba. The deepfake attempted to steer the conversation toward sensitive political commitments, but mismatched content raised suspicions, and the call was terminated.

Following the incident, congressional offices have upgraded their video conferencing security, implemented real-time biometric verification for high-level calls, and expanded staff training on recognizing deepfake political manipulation.

Conclusion 

The Top 10 Deepfake Phishing Scams above demonstrate that deepfake phishing is not a series of isolated incidents; the attacks span industries, geographies, and targets, from corporate leaders to political figures and popular online personalities. The common thread is the use of AI-generated voice, video, or images to create trust where none should exist, exploiting human and procedural weaknesses to bypass traditional security measures. For many victims, the financial and reputational damage has been significant, and in some cases irreversible.

Deepfake phishing is no longer science fiction; it is a real, evolving threat. As AI tools become more accessible, both individuals and organizations must stay vigilant. Below are some key takeaways to help maintain that vigilance:

Key Takeaways:

  • Establish rapid reporting and response protocols for suspected deepfake attempts.
  • Verify identity through multiple channels before acting on urgent or unusual requests.
  • Educate employees regularly on deepfake risks and detection techniques.
  • Implement multi-person approval for high-value financial transactions (see the sketch after this list for how such a control might work).
  • Adopt AI-based detection tools to flag manipulated audio, video, and images.
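
To make the verification and multi-person approval takeaways more concrete, here is a minimal Python sketch of a payment-release gate. It is an illustration only: the PaymentRequest record, the $10,000 threshold, and the callback_verified flag (set after confirming the request over a separately dialed, known phone number) are all hypothetical assumptions, not a description of any system used by the companies mentioned above.

```python
from dataclasses import dataclass, field

# Hypothetical policy values for illustration; real thresholds are set by company policy.
HIGH_VALUE_THRESHOLD = 10_000
REQUIRED_APPROVERS = 2


@dataclass
class PaymentRequest:
    requester: str
    beneficiary_account: str
    amount: float
    callback_verified: bool = False           # identity confirmed via a separate, known channel
    approvers: set = field(default_factory=set)


def approve(request: PaymentRequest, approver: str) -> None:
    """Record an approval; the requester cannot approve their own transfer."""
    if approver != request.requester:
        request.approvers.add(approver)


def can_release(request: PaymentRequest) -> bool:
    """Release funds only when the policy checks pass."""
    if request.amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value payments follow the normal workflow
    # High-value payments need an out-of-band identity check plus independent approvers.
    return request.callback_verified and len(request.approvers) >= REQUIRED_APPROVERS


# Example: a convincing deepfake "CEO" call alone can no longer trigger a transfer.
req = PaymentRequest("finance_clerk", "XX-000-111", 250_000)
approve(req, "finance_clerk")   # ignored: self-approval does not count
print(can_release(req))         # False - no callback verification, no independent approvers
req.callback_verified = True    # verified by calling the executive back on a known number
approve(req, "controller")
approve(req, "cfo_office")
print(can_release(req))         # True - policy satisfied
```

The point of this kind of design is that a single convincing voice or video call can never move money on its own: a high-value transfer also requires an out-of-band identity check and at least one approver who is not the person taking the call.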

FAQs

1. What is a deepfake phishing scam?

Answer: A deepfake phishing scam uses AI-generated voice, video, or images to convincingly impersonate a trusted person, tricking victims into sharing sensitive information or transferring money.

2. How can I spot a deepfake?

Answer: Look for subtle signs like unnatural facial movements, inconsistent lighting, mismatched lip-sync, odd pauses in speech, or unusual background noise in calls.

3. Who is most at risk?

Answer: Executives, finance staff, political figures, and public personalities are primary targets, but anyone can be targeted if scammers can gather enough personal or business information.

4. What should I do if I suspect a deepfake scam?

Answer: Pause all actions, verify the request using a separate communication channel, document the incident, and report it to your security or compliance team immediately.

5. How can organizations protect themselves?

Answer: Implement multi-factor verification for high-value transactions, train staff on deepfake awareness, adopt AI-based detection tools, and enforce strict communication verification protocols.
