AI-Driven Fraud: How Criminals Are Using AI to Scam You
by Dan Holmes
4 minutes • AI & Technology • April 21, 2025
How much do criminals love using AI for fraud? Let LoveGPT—a malicious program that helps criminals commit romance scams—count the ways.
LoveGPT is among the latest AI-powered fraud tools criminals have embraced, following the release of similar malicious programs like FraudGPT and WormGPT. LoveGPT is especially concerning because it was built to push a specific type of fraud: a long-con tactic that exploits victims' trust and frequently yields larger payouts. Romance scammers no longer have to worry about keeping their stories straight while juggling different digital schemes.
The rise of these malicious programs shows how AI is ushering in a new era of highly sophisticated fraud attacks. Imagine a world where phishing emails blend in seamlessly among your legitimate messages, instantly undoing the work of banks that have spent years teaching consumers to watch for grammar and spelling mistakes. Where criminals can use deepfakes to impersonate your loved ones or your boss. Or where criminals can create a highly realistic fake identity with just a few clicks.
How AI Is Supercharging Phishing Attacks
The truth is we don’t have to imagine this world. It’s already here. Not only are criminals rapidly inventing new AI-powered frauds, but they’re also making familiar ones even more effective.
Take phishing attacks, for example. Feedzai's research found that SMS and email phishing scams are the most common forms of generative AI-based attacks. Criminals can instantly write phishing messages that are grammatically accurate, crisply written, and error-free, making them far more believable and convincing.
What does this mean? First, phishing attacks are more scalable than ever, lowering the barrier to entry and allowing bad actors to cast a wider net. Second, the attacks become more valuable. If a fraudster sends roughly 10,000 phishing emails and 100 recipients respond, that's a 1% success rate. At AI-driven volumes, even that modest rate delivers a substantial return on investment and a growing stockpile of consumer credentials to use in follow-on attacks.
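To make the arithmetic concrete, here is a minimal sketch of the campaign economics described above. The function name and the 10x volume figure are illustrative assumptions, not data from the article or any real campaign:

```python
# Hypothetical illustration of phishing campaign economics.
# All figures beyond the article's 10,000-email / 100-response
# example are assumptions made for the sake of illustration.

def campaign_yield(emails_sent: int, responses: int) -> float:
    """Return the success rate of a phishing campaign as a fraction."""
    return responses / emails_sent

# The article's example: ~10,000 emails, 100 responses.
rate = campaign_yield(10_000, 100)
print(f"Success rate: {rate:.1%}")  # Success rate: 1.0%

# Generative AI removes much of the per-message writing cost, so the
# same criminal effort can produce far more messages. Even at an
# unchanged response rate, volume scales the credential haul linearly.
scaled_responses = int(100_000 * rate)  # hypothetical 10x volume
print(f"Credentials harvested at 10x volume: {scaled_responses}")  # 1000
```

The point of the sketch is that AI does not need to improve the response rate at all to make phishing more profitable; cheaper message generation alone multiplies the payoff.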
Deepfakes & Voice Cloning: The New Face of Fraud
Voice cloning is another top concern. I have personally appeared in numerous videos and interviews, giving fraudsters plenty of material to make a convincing deepfake of my identity. Social media provides a treasure trove of media for fraudsters to collect and manipulate. Bad actors can easily clone a loved one’s voice and pretend to be in pain or legal trouble, startling unsuspecting family members into sending them money.
Think about it this way: if it becomes easier and more likely that these AI-powered scams will actually work, more people will likely try their hand at it. GenAI is essentially lowering the hurdles to scams while simultaneously raising the rewards, bringing even more players into the fraud and scams game. This raises the already-serious stakes of AI-fueled fraud.
As fraudsters leverage sophisticated GenAI tools, their success rates will climb, meaning banks can expect to face a surge in fraudulent transactions to defend against. Financial institutions must therefore ensure they can scale their fraud detection capabilities to manage this onslaught of attacks.
Fighting Back: How Banks & Businesses Can Stop AI-Driven Fraud
Educating consumers about how scams are evolving to mimic well-known or trusted figures is one step. Teach consumers the telltale signs of a deepfake video, image, or recording. To avoid falling for a voice cloning scam, for example, family members could agree on a secret password to confirm the identity of the person on the other end of the line before sending any money.
But as technology improves, these educational efforts won’t be enough. Research in the Feedzai 2025 AI Trends in Fraud and Financial Crime Prevention shows 60% of financial institutions believe criminals are using advanced AI for voice cloning and other impersonation scams. It will take AI to fight AI.
Banks must make their own AI investments to catch fraud earlier, enhance operational efficiency, and adopt tools like data orchestration that enable faster fraud responses. Look at your technology partners and ask how they are innovating and how they plan to help protect your organization from these risks. With the rise of AI-powered fraud, it's essential to know how your vendors are researching and developing their own AI-powered tools.
Criminals don’t wait for permission to innovate, and they are increasingly making AI part of their fraud attack strategy. Banks need to match these efforts by launching their own innovation labs and adding GenAI capabilities to their existing fraud ecosystems and solutions. Just as importantly, they should hold their technology partners to the same standard, demanding the same level of AI-based ingenuity in their service agreements.
Scammers won’t slow down. If your financial institution is waiting to invest in AI-driven fraud detection, prevention, and automation, be warned: you’re letting the bad guys stay in the lead.
- Blog: What is a Deepfake and How Do They Impact Fraud?
- Resource: Coming Soon: Feedzai’s Flagship AI Report with Global Industry Insights
- Solution Sheet: Feedzai as a Service
- Solution: ScamAlert: Use Generative AI to Stop Scams
All expertise and insights are from human Feedzians, but we may leverage AI to enhance phrasing or efficiency. Welcome to the future.