Top 5 Ways Scammers Have Used AI And Deepfakes In 2025
AI is changing everything, including how scammers trick us. From fake voices to deepfake videos, here are the top ways criminals are weaponising technology in 2025.
AI tools are advancing at lightning speed, making everyday life more efficient and creative. But scammers are always close behind, weaponising the same technology to trick, manipulate, and steal.
The NCSC reports that New Zealanders are losing more than NZ$5 million per quarter to scams and fraud, with NZ$5.7 million in direct losses recorded in the most recent reporting period. Data from the recent Gen Threat Report echoes this, showing attackers have churned out hundreds of thousands of AI-generated scam websites this year.
Here are the top 5 ways scammers have exploited AI and deepfakes in 2025, plus what you can do to protect yourself.
1. Deepfake Voice Cloning Amplifying Vishing for Urgent Calls
Description:
Voice cloning has become so accessible that anyone can replicate a person’s voice with just a few seconds of audio. Scammers use this to impersonate loved ones or trusted figures, pushing victims into hasty decisions.
Real-World Example:
In New Zealand, AI-enabled voice impersonation has emerged as a recognised scam risk. Recent information from BNZ suggests that voice cloning is now among the top AI-related scam concerns for New Zealanders, reflecting growing awareness that scammers can use AI to convincingly mimic the voices of trusted people, including family members, during fraudulent phone calls.
Why It Works:
- Urgency and fear overwhelm rational thinking.
- Emotional connections make victims less sceptical.
- Background noise and tone mimic reality, reinforcing authenticity.
Protection Tips:
- Always verify requests through a known number, family member, or secondary contact.
- Create a family “safe word” for emergencies that only real family would know.
- Slow down and take time to verify. Scammers rely on panic to get you to act before you think.
2. VibeScams: AI Website Builders Fuelling Phishing
Description:
Scammers are misusing AI website builders to create professional-looking phishing sites in minutes. With just a prompt, these platforms can clone an existing site’s design, branding, and even customer service features.
Real-World Example:
Norton researchers documented fake Coinbase logins, Microsoft Office 365 portals, DHL delivery pages, and even localised tech support scams created with AI website builders. According to our telemetry, web skimming attempts, where attackers inject malicious code into a store’s checkout page to capture card numbers and billing details, increased quarter over quarter, with New Zealand registering a 416% rise. Worldwide, more than 580 new malicious AI-generated websites appear every day.
Why It Works:
- Sites look visually identical to trusted brands, making detection difficult.
- Typosquatting tricks, such as “coiinbase” instead of “coinbase,” exploit quick glances.
- Low cost and instant setup mean scammers can relaunch endlessly.
Protection Tips:
- Double-check URLs carefully, especially for small spelling changes.
- Do not click links in unsolicited texts or emails. Use official apps or bookmarks.
- Use multi-factor authentication (MFA) to protect accounts even if a password is stolen.
- Install a reputable security solution that blocks known phishing domains.
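As an illustration of the typosquatting trick above, look-alike domains such as “coiinbase” sit only one character edit away from the real brand. The short sketch below, using a hypothetical brand list of my own choosing, shows how edit distance can flag such near-misses:

```python
# Sketch: flagging look-alike (typosquatted) domains with edit distance.
# The brand list and the two-edit threshold are illustrative assumptions.

KNOWN_BRANDS = ["coinbase.com", "microsoft.com", "dhl.com"]

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(domain: str, max_edits: int = 2) -> bool:
    """True if the domain is close to, but not exactly, a known brand."""
    return any(0 < levenshtein(domain, brand) <= max_edits
               for brand in KNOWN_BRANDS)
```

Security products apply far more sophisticated checks than this, but the same idea, small edits to a trusted name, is exactly what a quick glance at a URL tends to miss.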
3. AI-Powered Romance and Friendship Scams
Description:
Romance scams are not new, but AI makes them more convincing. Chatbots trained on large language models can hold consistent, natural conversations around the clock. In 2025, scammers are layering in deepfake videos to “prove” their identities.
Real-World Example:
In New Zealand, AI-driven scams increasingly use manipulated images and personal data to make deception feel real and personal. Avast researchers report that the risk of sextortion scams in New Zealand rose by 137% in early 2025, driven by AI-generated deepfake images and highly targeted messages built from data stolen in past breaches. Victims receive convincing threats claiming access to explicit material, sometimes combined with accurate personal details, making the scams appear credible and difficult to dismiss.
Why It Works:
- AI provides consistency. It never gets tired, forgetful, or distracted.
- Deepfake videos eliminate one of the biggest red flags: a refusal to appear on live video calls.
- Victims invest emotionally, which makes financial requests harder to resist.
Protection Tips:
- Be cautious if an online partner avoids in-person meetings or constantly makes excuses.
- Watch for escalating asks, such as small requests turning into larger financial demands.
- Reverse image search photos to see if they appear on multiple unrelated profiles.
- Talk to a trusted friend before sending money to someone you only know online.
4. Business Email Compromise 2.0 (BEC with Deepfakes and Voice Clones)
Description:
Scammers are evolving BEC beyond simple phishing. Using AI, they are cloning the voices of executives and, in some cases, generating convincing videos to lend credibility to fraudulent instructions.
Real-World Example (WPP, UK and Global):
According to The Guardian, the CEO of WPP was targeted by scammers who cloned his voice and used it on a fake Teams-style call. The voice sounded authentic and instructed staff to share sensitive access credentials and transfer funds under a plausible pretext. While this case stopped short of a major financial loss, it highlights how attackers are blending AI audio and video with traditional BEC tactics.
Why It Works:
- Hearing a familiar voice or seeing a familiar face overrides scepticism.
- Authority bias makes employees feel compelled to act quickly.
- Combining emails with voice or video lowers the chance of anyone demanding further verification.
Protection Tips:
- Require out-of-band verification, such as calling back on a known number, before transfers.
- Enforce dual approval for high-value or unusual payments.
- Train employees to pause and verify even when instructions seem urgent.
- Explore voice authentication and anomaly detection tools that flag suspicious audio or video.
5. Fake Celebrity Endorsement or Investment Scams
Description:
AI deepfakes are increasingly used to create videos of celebrities or financial leaders promoting fake investments, miracle products, or crypto schemes. These scams spread quickly on social media and can be difficult to distinguish from real endorsements.
Real-World Example:
In 2025, multiple deepfake videos of Elon Musk circulated across YouTube and X (formerly Twitter), promoting fraudulent crypto giveaways. Victims believed they were sending funds directly to Musk’s team, only to lose thousands of dollars. Similar scams now feature actors, athletes, and even local influencers.
Why It Works:
- Authority bias leads people to trust familiar faces.
- Viral sharing amplifies the scam before platforms can remove content.
- The combination of urgency and celebrity creates fear of missing out.
Protection Tips:
- Always verify endorsements through official websites or verified social accounts.
- Be sceptical of investment pitches that promise guaranteed returns.
- Report fraudulent videos immediately to the platform to speed up removal.
AI has supercharged old scams with new tricks, making them faster, more convincing, and more scalable than ever before.
The best defence is a layered one: stay sceptical, double-check requests and URLs, and protect your devices with strong security software, identity monitoring, and MFA.
In 2025, the biggest threat is not AI itself. It is how humans are manipulated into believing what AI makes possible.