by Anusha Parisutham
8 minutes • AI & Technology • August 13, 2025
GenAI Fraud: What It Is, How It Works, and How to Stop It
Generative AI (GenAI) isn’t just a buzzword. In just a few short years, it has transformed from a radical new concept into a widely used practice across many businesses and industries, including financial services. Unfortunately, criminals are among the adopters, folding GenAI techniques into their methods. Losses from GenAI fraud could reach $40 billion by 2027, according to projections by Deloitte.1
In this article, we’ll break down the inner workings of GenAI fraud and how banks and businesses can implement AI-driven fraud prevention strategies to protect themselves and their customers from these advanced threats.
Key Takeaways
- In Generative AI fraud (or GenAI fraud), criminals use advanced AI tools to create realistic images, videos, audio, or text to push fraud or scams rapidly.
- GenAI fraud losses are projected to reach $40 billion by 2027, based on analysis by Deloitte.2
- Common GenAI fraud tactics include deepfakes, voice cloning, phishing attacks, social engineering, fake data creation, document manipulation, and more.
- To combat GenAI fraud, banks need a layered fraud prevention strategy that integrates behavioral analytics, device fingerprinting, and AI-based anomaly detection, complemented by their own GenAI-powered tools for spotting the red flags of deepfakes and other scams.
What Is GenAI Fraud?
Generative AI fraud is the abuse of sophisticated AI tools to create realistic-looking images and videos, error-free text, or convincing audio in order to carry out fraud and scams. Programs like FraudGPT, WormGPT, or LoveGPT give criminals easy access to technology capable of quickly creating deepfakes, manipulating audio recordings, or drafting personalized phishing emails and texts.
With these realistic-looking or sounding materials, criminals can launch sophisticated attacks like synthetic identity fraud, imposter fraud, or romance scams at scale. They can also create convincing document forgeries and synthetic data to sidestep security controls.
Why GenAI Fraud Is Growing So Quickly
GenAI has quickly become part of many bad actors’ toolkits. Here’s why GenAI-based fraud has surged in recent years.
- Advanced AI Tools are Easily Accessible: The barrier to entry for GenAI fraud has never been lower. Powerful open-source GenAI models (e.g., LLaMA, Stable Diffusion) are easily accessible to anyone, including criminals with malicious intent. This means virtually anyone has the tools needed to create convincing fake content with little or no technical expertise.
- Automation of Criminal Tactics: GenAI enables scams to be produced at massive scale. Fraudsters can use voice cloning and deepfakes to impersonate company executives, create synthetic documentation for onboarding fraud, or generate persuasive phishing messages tailored to specific profiles.
- Traditional Defense Evasion: GenAI can mimic legitimate data and communications (e.g., generating personalized emails, real-time voice cloning). With these capabilities, fraudsters can bypass rules-based detection, outmaneuvering both humans and machines.
- The Rise of Fraud-as-a-Service: Criminals have created a thriving underworld economy where knowledge, tactics, and technologies are readily shared. Bad actors share GenAI models, scripts, and ready-made fake content in these illicit markets. This free exchange of information opens numerous opportunities for bad actors, expands the fraud ecosystem, and increases the attack volume.
“GenAI is essentially lowering the hurdles to scams while simultaneously raising the rewards, bringing even more players into the fraud and scams game. This raises the already-serious stakes of AI-fueled fraud.” – Dan Holmes, Director of Fraud and Identity SME at Feedzai.
Top GenAI Fraud Techniques and How They Work
A recent Feedzai survey of 562 global financial services professionals found the most common use cases for GenAI in fraud include:
- Voice Cloning (60%): By recording an unsuspecting person’s voice, bad actors can convince call center staff to grant them access to the legitimate user’s account or approve transactions.
- SMS/Phishing Scams (59%): AI can craft convincing, error-free fake text messages or emails that read like they came from real people. Because the communications are so compelling, criminals are more likely to deceive users into clicking malicious links, sharing sensitive information, or transferring money.
- Deepfakes (44%): Criminals are using AI-generated fake videos and audio deepfakes to pose as CEOs and other C-suite executives for Business Email Compromise (BEC) attacks. They are also impersonating police, bank personnel, or even celebrities for romance scams.
In addition to these tactics, GenAI can also be used for creating new identities and automating widespread fraud and scam attacks. These methods include:
- Synthetic Identity Fraud: Criminals combine real and fabricated data to build new “identities” that can be used for digital onboarding with banks or businesses. Synthetic identity fraud is considered the fastest-growing type of financial crime, with Deloitte projecting losses to reach $23 billion by 2030.3
- Document Forgery: GenAI can generate or manipulate critical documents such as financial statements, invoices, pay stubs, and identity documents. In the US, the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) recently issued a warning to financial institutions to review identification documents carefully due to “suspected use of deepfake media in fraud schemes targeting their institutions and customers.”4
- Automated Scam Calls and Chatbots: AI-driven chatbots and voice clones can be automated and launched to multiple targets simultaneously. Malicious programs like LoveGPT, for example, can be used to target multiple romance scam victims at once. The programs use LLMs to refine messaging for specific targets.
- Synthetic Transaction Data Creation: Criminals can use GenAI to create fabricated transaction activity. Using this data, they can test evasion techniques against anti-fraud systems to understand how to avoid detection by fraud prevention controls.
How to Detect & Prevent GenAI Fraud
The rapid rise of GenAI fraud requires an equally rapid response from financial institutions. The good news is that 96% of banks worldwide have implemented GenAI, according to Feedzai’s 2025 AI Trends in Fraud and Financial Crime Prevention report. But to be effective, banks need a robust strategy to catch GenAI fraud and stay on top of new tactics.
- Layered AI-powered Defenses: Implement behavioral analytics, device fingerprinting, and AI-based anomaly detection to spot subtle shifts in user and transaction patterns that GenAI forgeries may trigger.
- Deepfake and Synthetic Content Detection: Use specialized tools to analyze media for signs of AI manipulation. These solutions should be capable of reviewing content for inconsistencies in images, audio, video, or written style.
- Dynamic KYC & Continuous Authentication: Move beyond static document checks. Use biometric authentication, liveness tests, and ongoing behavioral monitoring to detect synthetic identities and impersonation attempts.
- Employee and Customer Education: Train your staff and your customers about the risks of GenAI fraud. Provide regular updates about emerging trends, especially deepfakes, voice cloning, and hyper-personalized phishing.
- Threat Intelligence Sharing: Participate in industry consortia to exchange information about new GenAI-powered scams, attack vectors, and fraudster toolkits. Enhance your knowledge with federated learning, which allows you to gain and share fraud-related insights with other organizations without compromising data privacy.
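To make the first layer concrete, here is a minimal, illustrative sketch of AI-based anomaly detection on a single behavioral signal (transaction amount). The function name, threshold, and sample figures are assumptions for illustration only; production systems combine many more behavioral and device signals than a simple deviation score.

```python
import statistics

def flag_anomalies(history: list[float], new_txns: list[float],
                   threshold: float = 3.0) -> list[bool]:
    """Toy anomaly check (illustrative assumption, not a production model):
    flag a transaction when its amount deviates from the customer's
    historical mean by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [abs(amount - mean) / stdev > threshold for amount in new_txns]

# A customer's typical spending, followed by two new transactions:
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 45.0, 58.0]
print(flag_anomalies(history, [50.0, 950.0]))  # [False, True]
```

The $950 transaction is flagged because it sits far outside the customer’s normal pattern; this is the kind of subtle shift in user and transaction behavior that the layered defenses described above are designed to surface.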
“Leveraging AI’s continually improving and detailed understanding of scams, GenAI enhances advanced scam detection, offering our clients dependable and insightful information.” – Catarina Godinho, Senior Product Marketing Specialist at Feedzai
Ethical and Security Risks of GenAI in Financial Fraud
Beyond external fraud threats, banks must also weigh the ethical implications of using GenAI in their own solutions. GenAI is a powerful, transformative tool, and banks must account for that as they add the technology to their roadmaps. Key considerations include how to responsibly handle the data used to train both GenAI and traditional AI models, as well as essential privacy and security questions.
Here are some of the most important questions to consider when implementing GenAI.
- Keep Data Private, Ensure Consent: Data privacy concerns were the top issue identified in the Feedzai 2025 AI Trends in Fraud and Financial Crime report. AI and GenAI require large volumes of data for training. Data must be managed in a way that follows strict data privacy and security regulations like GDPR and CCPA.
- Data Management and Accuracy Challenges: How to handle large volumes of data was another leading issue cited in Feedzai’s survey of global financial services professionals. As digital banking transactions rise, 87% said they were concerned about how to manage data accurately. Incomplete, inaccurate, or improperly handled data can result in poor model performance. Concerns over data handling were cited by 59% of respondents as a reason their bank has not yet adopted AI.
- Data Hallucinations: While GenAI is powerful, it can sometimes “hallucinate,” producing information that sounds plausible but is actually incorrect or nonsensical. These inaccuracies can result in costly mistakes, damage customer trust, and regulatory issues. Ensuring the factual accuracy of GenAI’s output is critical for reliable and responsible GenAI deployment.5
- Ethical AI and Bias: Respondents also voiced concerns over ethical AI and bias. Biased data can result in discriminatory risk decisions or incorrectly flagging transactions as false positives, leading to unfair outcomes for customers.
- Explainability and Transparency: Organizations must be prepared to justify decisions made by AI. Ensure your models are explainable to justify decisions to any relevant regulator or authority.
- Job Displacement: AI promises to make life easier for many people. However, there is sometimes a trade-off. Feedzai’s research found that 26% of respondents are concerned about losing their jobs to AI. Nearly half (46%) believe the technology will replace many roles and positions.
How Feedzai Uses Generative AI to Enhance Fraud Prevention
Feedzai is putting GenAI on the frontlines against fraud and scams with several of our solutions. This includes:
- Empowering Customers with ScamAlert: ScamAlert is Feedzai’s flagship GenAI tool. It enables customers to safeguard themselves against scams by leveraging the rapid capabilities of GenAI. When a customer comes across a suspicious-looking ad, too-good-to-be-true offer, invoice, text message, or other communication, they can submit a screenshot. ScamAlert quickly analyzes the image for red flags, like unrealistic offers or urgent pressure tactics. From there, it recommends next steps, such as verifying the sender or using only secure payment methods. This helps customers avoid scams before authorizing risky transactions, turning them into active defenders rather than passive victims.
- TRUST Framework: Feedzai launched the TRUST Framework, a guide for banks to ethically develop and deploy AI from the beginning. TRUST is built on five key pillars: Transparent, Robust, Unbiased, Secure, and Tested. Together, these pillars ensure AI systems are explainable, resilient, fair, secure, and rigorously tested before deployment. Using the TRUST Framework, financial institutions can turn ethical imperatives (including privacy, fairness, and explainability) into competitive advantages.
- Case Summary Agent: Fraud analysts spend too much time sifting through data to understand an alerted transaction. Key information can easily be missed because subtle patterns aren’t always obvious. Feedzai’s Case Summary Agent streamlines this process by identifying what’s truly unusual in an alert, highlighting the specific areas that require a closer investigation. It also provides clear explanations for why patterns are considered suspicious.
- Rule Creator: Manually creating rules can be prone to errors and is often a time-consuming process, especially for those who aren’t experts in the language of rules. Rule Creator changes that by generating new rules based on a simple user prompt. It interactively guides users through the rule creation process, minimizing redundancies and eliminating inefficient rules, resulting in faster rule creation, quicker time to value, and enhanced team efficiency.
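As an illustration of the kind of output a prompt-to-rule tool could produce, here is a hypothetical fraud rule expressed as a simple named predicate in Python. This is not Feedzai’s rule language; the rule name, transaction fields, and thresholds are invented for the example, sketching what a prompt like “flag card-not-present transactions over $2,000 from a brand-new device” might compile down to.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A fraud rule as a named predicate over a transaction record
    (hypothetical representation for illustration)."""
    name: str
    predicate: Callable[[dict], bool]

    def evaluate(self, txn: dict) -> bool:
        return self.predicate(txn)

# Rule a generator might emit for the prompt described above:
high_risk_cnp = Rule(
    name="high_value_cnp_new_device",
    predicate=lambda t: (
        t["channel"] == "card_not_present"
        and t["amount"] > 2000
        and t["device_age_days"] < 1
    ),
)

txn = {"channel": "card_not_present", "amount": 3500, "device_age_days": 0}
print(high_risk_cnp.evaluate(txn))  # True
```

Representing rules as structured, named objects like this is what lets a generator check new rules for redundancy against existing ones before deployment.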
As fraudsters increasingly embrace GenAI fraud to scale attacks and evade traditional detection methods, banks and businesses must respond with equal urgency. It’s essential to leverage advanced AI-driven defenses, continuous employee training, and robust ethical frameworks. The challenge ahead is not just to innovate, but to do so responsibly, ensuring that as organizations adopt new AI capabilities, they also safeguard customers, preserve their trust, and stay one step ahead of new GenAI fraud tactics.
Resources
Frequently Asked Questions about GenAI Fraud
What is GenAI fraud?
GenAI fraud involves using generative AI to create convincing fake content like deepfakes (manipulated audio/video), synthetic identities, and realistic phishing emails or text messages. Fraudsters leverage GenAI to automate and scale scams, making it easier to bypass traditional security and verification methods, ultimately leading to financial losses and identity theft.
Can banks detect AI deepfakes?
Banks are implementing multi-layered approaches to detect AI deepfakes. While traditional voice authentication can be vulnerable, many now use:
- Multi-factor authentication (MFA)
- Liveness detection (verifying a real person)
- Behavioral biometrics (analyzing unique user patterns like mouse movements, keystrokes, and touchscreen patterns)
- Real-time fraud detection systems with machine learning
Training staff to recognize deepfake red flags is also crucial.
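The behavioral-biometrics signal above can be sketched with a toy keystroke-cadence check: compare a session’s inter-key timing against the user’s enrolled profile. The function name, tolerance, and timing figures are assumptions for illustration; real systems model far richer signals (mouse movements, touchscreen pressure, navigation habits) with statistical or ML models.

```python
import statistics

def matches_profile(profile_intervals: list[float],
                    session_intervals: list[float],
                    tolerance: float = 0.35) -> bool:
    """Toy behavioral-biometric check (illustrative assumption): accept the
    session if its mean inter-key interval (ms) is within `tolerance`
    relative difference of the user's enrolled typing cadence."""
    p_mean = statistics.mean(profile_intervals)
    s_mean = statistics.mean(session_intervals)
    return abs(s_mean - p_mean) / p_mean <= tolerance

enrolled = [120.0, 135.0, 128.0, 140.0, 122.0]   # user's typical cadence (ms)
genuine  = [125.0, 131.0, 138.0, 119.0, 127.0]   # similar cadence
scripted = [40.0, 42.0, 41.0, 39.0, 40.0]        # uniformly fast, bot-like

print(matches_profile(enrolled, genuine))   # True
print(matches_profile(enrolled, scripted))  # False
```

The scripted session fails because its cadence is implausibly fast and uniform, which is exactly the kind of pattern that distinguishes automated or impersonated sessions from a genuine user.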
Is GenAI fraud regulated?
While no single, comprehensive regulation specifically covers GenAI fraud, financial authorities like the US FinCEN (Financial Crimes Enforcement Network) issue alerts to financial institutions regarding its rise. These agencies advise on best practices for detection and prevention, emphasizing the need for robust fraud risk management frameworks and continuous monitoring to comply with existing anti-fraud and financial crime regulations.
Footnotes
4 https://www.fincen.gov/sites/default/files/shared/FinCEN-Alert-DeepFakes-Alert508FINAL.pdf
5 https://www.ibm.com/think/news/llm-hallucination-human-cognition
All expertise and insights are from human Feedzaians, but we may leverage AI to enhance phrasing or efficiency. Welcome to the future.