by Dan Holmes • 10 minutes • Fraud & Scams • September 10, 2025
Understanding Social Engineering Fraud: From Psychological Traps to AI‑Driven Fraud
Social engineering scams may start with a phone call from someone impersonating a loved one who claims to need your help. Or an urgent email that appears to come from your boss. Or a text message from a government agency warning you about a hefty fine you didn’t expect. According to the Federal Trade Commission, scams like these cost US consumers $12.5 billion in 2024 alone.1
The rise of generative AI technology makes social engineering scams increasingly difficult to catch in the moment. Newer methods are already being widely used, targeting both human emotions and trusted technologies.
Dive into how social engineering fraud works, prevalent scam methods, and the measures banks can take to protect their customers when it’s harder than ever to trust the person on the other end of a phone call or text message.
Key Takeaways
- Social engineering scams rely on psychological manipulation to coerce victims to share information or transfer money.
- Common social engineering tactics include phishing, smishing, vishing, business email compromise, and water-holing.
- GenAI has amplified the social engineering threat through deepfakes, voice cloning, and malicious programs like FraudGPT.
- Banks need to take a “hyper-personalized” approach to customer data to catch social engineering activity and stop scams before money is lost.
- “Customers are being asked to either give up their card information or make card-based transactions to merchants that the fraudster is then able to benefit from.” – Andy Renshaw, SVP of Product Strategy and Management at Feedzai.
What Is Social Engineering and Why It Matters Today
Social engineering is the use of psychological manipulation to trick people into sharing sensitive information or moving money. Well-known social engineering methods include phishing and its variations, such as smishing, vishing, business email compromise, and water-holing, among others. Fraudsters build carefully crafted deceptions or “personal lures” that resonate with victims on an emotional level by studying a target’s social media profiles, silently observing their everyday behaviors, and monitoring their online activity.
These methods have undergone a considerable evolution in the past few decades. Remember the days of phishing messages from supposed deposed Nigerian princes looking for help moving vast sums of money? Eventually, people grew wise to these schemes. As a result, criminals realized that knowing specific details about their targets is key to making scams more convincing. By demonstrating familiarity, fraudsters can persuade targets to lower their guard, leading to lost money and compromised credentials.
In 2024, social engineering fraud contributed significantly to the $16.6 billion in losses reported to the US FBI.2 As scams become increasingly personalized, banks and financial institutions need hyper-personalized data strategies to protect their customers.
How Social Engineering Works: Common Attack Methods
Social engineering attacks take many familiar forms, most notably variations on phishing. Here are some of the prevailing tactics employed by scammers.
Phishing
Deceptive emails or texts that appear to come from trusted entities (e.g., a boss, government agency, law enforcement, well-known retailers, or even loved ones), pressuring victims into divulging credentials or making payments to resolve an urgent problem.
Spear Phishing
Spear phishing is a more focused social engineering tactic aimed at a specific individual within a company (often someone authorized to approve payments), deceiving them into making a payment.
“We’ve traditionally thought about social engineering scams as a payment-type fraud…We’re now starting to see that migrate into the cards-type channels, where customers are actually being asked to either give up their card information or make card-based transactions to merchants that the fraudster is then able to benefit from.” – Andy Renshaw, SVP of Product Strategy and Management, Feedzai
Business Email Compromise (BEC)
In a BEC attack, a fraudster spoofs a company email account to deceive employees, customers, or partners. They’ll impersonate a CEO or another high-level executive and create a sense of urgency to bypass normal verification processes. BEC accounts for some of the heaviest fraud losses, at $2.8 billion, according to the FBI’s Internet Crime Complaint Center (IC3).2
Clone Phishing
In a clone phishing attack, criminals send emails that strongly resemble messages from a bank, a postal service, or a retailer. The messages from trustworthy-looking entities convince victims to click on suspicious links or share one-time passcodes (OTPs).
Smishing
Smishing is the SMS-specific variant of phishing, in which criminals send text messages while pretending to be trusted figures. As with other phishing attacks, the messages are designed to look as if they come from trusted sources and aim to persuade victims to click on malicious links or divulge personal information.
Vishing
Vishing is a voice-specific phishing method where criminals rely on live voice calls with victims. Criminals pretend to be tech support, a tax official, or a law enforcement agent, calling with an urgent matter. The longer a victim engages in conversation, the more likely they are to follow a scammer’s instructions.
Pretexting
In pretexting schemes, scammers invent a specific scenario, such as pretending to be a co-worker, an HR representative, or a vendor, and request information like bank details or changes to an existing account. The pretext gives the scam credibility, making victims more likely to lower their guard.
Water-holing
Criminals compromise a website frequently visited by their targets or by employees of a specific company. From there, a pop-up message appears that installs malware, requests login credentials, or redirects visitors to another page.
How Generative AI Is Transforming Social Engineering
As if these methods weren’t concerning enough, advancements in AI technology now enable criminals to take social engineering scams to new heights. GenAI is widely available and has a low barrier to entry, meaning criminals can easily learn new scam tactics and launch attacks at unprecedented scale.
With an expanding field of GenAI-powered tools and many scammers ready to test new tactics, a perfect storm for social engineering takes shape. As more scammers adopt these tools, successful campaigns scale further, raising the rewards for criminals.
Deloitte projects that GenAI-related losses will reach $40 billion by 2027.3 Some of the top GenAI-based social engineering threats to monitor include:
GenAI-based Phishing
GenAI promises to enhance the performance of traditional phishing attacks and their variations. Using large language models (LLMs), criminals can draft error-free messages, avoiding the grammatical and spelling mistakes that once made phishing easy to spot. Criminals can also feed AI models public information from sources like LinkedIn or public statements. With this input, the models can closely mimic the voice and tone of a senior executive or industry professional, making phishing attacks even more effective.
Automated Agentic Attacks
GenAI-powered agents can create and manage entire processes to automate social engineering attacks.4 These agents begin by collecting data from multiple sources, including social media feeds, public websites, and the dark web.5 Next, they tailor messages to specific profiles, making the messaging more believable. Finally, they can blast emails to a large number of targets, increasing their chance of payout.
Deepfakes
Criminals manipulate images of real people into realistic-looking video and audio. Feedzai’s 2025 AI Trends in Fraud and Financial Crime Prevention report found that 44% of global financial services professionals say criminals are already using deepfakes. Deepfake-related fraud reports surged by a staggering 1,740% from 2022 to 2023, according to data from the World Economic Forum.6 In one high-profile case, an employee of a Hong Kong-based financial firm was manipulated into paying out $25 million in a deepfake video call scam.7
Voice Cloning
Voice cloning is the most common form of GenAI-powered fraud encountered by financial professionals, cited by 60% of respondents in Feedzai’s AI research. Criminals manipulate audio recordings to convince victims that a loved one is in trouble and needs money for a lawyer or a medical emergency. Recently, US authorities arrested several individuals for operating an elder fraud ring that caused $5 million in losses across more than 400 victims.8
FraudGPT
FraudGPT, as its name suggests, is a malicious chatbot tool modeled on OpenAI’s ChatGPT and marketed to criminals on dark web forums. It was created specifically for criminal purposes, such as generating phishing scams and drafting personalized messages for targets.
The Psychology Behind Social Engineering Attacks
Successful social engineering is the key criminals use to quickly gain and abuse victims’ trust, and it typically unfolds in four stages. It starts with a communication that rattles the recipient and disturbs their sense of safety. By the time the cycle has finished, the criminal has manipulated the victim into trusting them enough to transfer money or share sensitive information.
Here’s how a social engineering attack relies on psychological deception.
Social Engineering Methods: A Psychological Guide

| Stage | Description |
| --- | --- |
| 1. Establish Authority & Trust | Scammers impersonate a trusted figure (e.g., a boss, government official, family member, or IT specialist) to lower the victim’s guard and instantly build credibility. |
| 2. Urgency | They introduce a fake emergency, such as claiming a bank account is compromised or a loved one is in trouble, to trigger panic and pressure the victim into acting without thinking. |
| 3. Reciprocity | Next, the scammer offers to “help” resolve the fabricated crisis, asking the victim for money, account access, or sensitive personal information in return. |
| 4. Manipulation | By keeping the conversation going, the attacker builds rapport and trust, making the victim more likely to hand over money or confidential details before they realize something is wrong. |
This pattern illustrates why social engineering is such a powerful tool for criminal activity. Criminals understand that, at the end of the day, we’re all human beings who need security and companionship. These scams prey on people’s fears, pushing victims to quickly trust the person delivering the alarming news. At that point, the victim will listen to the scammer, believing their guidance is the best chance to feel safe again.
Social Engineering Trends to Monitor in 2025
New social engineering attacks continue to emerge that abuse victims’ trust in familiar technologies and applications. Some of the recent social engineering trends to monitor include:
CAPTCHA Scam Abuse
It’s common when you’re online to be asked, “Are you a robot?” and to complete a short check to prove you’re really a person, not a malicious bot. Unfortunately, criminals are now exploiting this trusted web verification step with fake CAPTCHA screens that look real. These bogus screens deceive visitors into copying and pasting malicious commands onto their devices. According to an August 2025 report from The Hacker News, this method, known as “ClickFix,” has become increasingly prevalent over the past few months.9
Stealth Manipulation
Another recent social engineering tactic takes a different approach. While ClickFix deceives victims into copying and pasting malicious commands onto their devices, a newer technique called FileFix works through the Windows File Explorer window. Visitors are prompted to open what appears to be a shared file; opening it can install malicious code on the user’s device.
Advanced Deepfakes & Voice Cloning
Deepfakes and voice cloning techniques are getting more sophisticated and convincing. What’s even more disturbing is that they can now unfold in real time. Among the more notorious groups using these methods are the Yahoo Boys, a collective of scammers who use deepfakes to push romance scams, according to Wired.10
How Feedzai Fights Social Engineering Fraud
Criminals have a wealth of tactics to push social engineering scams. The convincing nature of these scams makes it more challenging than ever to distinguish reality from fraud. But banks have a key advantage to protect their customers: data.
Banks need a “hyper-personalized” approach to data that looks beyond traditional data-driven strategies to proactively prevent fraud and financial crime. Here’s how hyper-personalization empowers banks to use customer data to stop social engineering scams.
“Having the data is important. Having that data in real time as it occurs is even more important…picking what is predictive and what is not predictive ensures you’re not wasting a lot of time collecting data that doesn’t meet your goals.” – Jas Anand, Senior Fraud Executive, Feedzai
Understand ‘Normal’ Customer Behavior
Banks need to understand the “normal” behavior patterns of their customers. This goes beyond transactional data (e.g., when and how much a customer spends) to include interaction data. Review how often customers engage with a website, contact a call center, or visit a bank branch in person. By monitoring these behaviors, your bank can spot changes or signs that a customer is under the sway of a social engineering scam.
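To make the idea concrete, here is a minimal sketch (not Feedzai’s product logic) of a per-customer baseline that combines transactional and interaction history and flags activity that deviates sharply from it. All field names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class CustomerBaseline:
    """Hypothetical per-customer profile built from historical activity."""
    transfer_amounts: list[float]   # past transfer sizes
    daily_logins: list[float]       # online sessions per day

def z_score(history: list[float], value: float) -> float:
    """How far a new observation sits from this customer's own history."""
    if len(history) < 2:
        return 0.0
    sd = pstdev(history)
    return 0.0 if sd == 0 else (value - mean(history)) / sd

def looks_coached(baseline: CustomerBaseline, transfer: float, logins_today: float) -> bool:
    # Illustrative thresholds only: a sudden, unusually large transfer combined
    # with a spike in sessions is a common pattern when a victim is being coached.
    return (
        z_score(baseline.transfer_amounts, transfer) > 3
        and z_score(baseline.daily_logins, logins_today) > 2
    )
```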
Behavioral Biometrics
Behavioral biometrics provides a continuous authentication layer that silently monitors user-device interactions, such as typing speed, mouse movements, and touchscreen pressure, to confirm identity and intent. This helps banks detect when legitimate users act suspiciously, a sign they might be under the influence of a scammer.
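As a rough illustration of the concept, and assuming hypothetical session features (keystroke timing, pointer speed, touch pressure), a live session can be compared against the customer’s stored profile and escalated when it drifts too far. Real behavioral-biometrics systems use far richer signals and models than this sketch.

```python
import math

# Hypothetical stored profile of how this customer normally interacts with the app.
PROFILE = {"avg_keystroke_ms": 182.0, "mouse_speed_px_s": 410.0, "touch_pressure": 0.42}

def session_distance(session: dict[str, float], profile: dict[str, float]) -> float:
    """Normalized distance between live-session features and the stored profile."""
    return math.sqrt(sum(((session[k] - profile[k]) / profile[k]) ** 2 for k in profile))

# A live session where typing is rushed and pointer movement is erratic.
live = {"avg_keystroke_ms": 95.0, "mouse_speed_px_s": 880.0, "touch_pressure": 0.15}

if session_distance(live, PROFILE) > 1.0:  # illustrative threshold
    print("Step-up verification: behavior does not match this customer's usual pattern")
```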
Real-Time Data Processing and Predictive Analytics
Knowing your customer’s normal behavior yesterday isn’t enough when scams move at a rapid pace. It’s critical to be able to process large volumes of data in real time while the customer is still transacting. Banks can accurately prevent scam losses by utilizing AI and machine learning for real-time data monitoring. To do this effectively, focus on the data that is predictive of fraud to avoid wasting time on irrelevant data types.
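A minimal sketch of what that scoring step might look like, assuming a pre-trained model and a handful of hand-picked predictive features; none of this reflects Feedzai’s actual pipeline or feature set.

```python
from typing import Callable

# Only features selected for predictive value are computed and scored in real time.
FEATURES = ["amount_vs_30d_avg", "new_payee", "session_minutes", "device_changed"]

def score_event(event: dict, model: Callable[[list[float]], float]) -> str:
    """Score a single payment event while the customer is still transacting."""
    x = [float(event[f]) for f in FEATURES]  # drop everything that isn't predictive
    risk = model(x)                          # estimated probability of a scam
    if risk > 0.90:
        return "hold_payment_and_contact_customer"
    if risk > 0.60:
        return "show_targeted_scam_warning"
    return "approve"
```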
Leverage Aggregated and Consortium Data
It’s not enough to rely on your own organization’s data. By tapping into shared intelligence from other organizations, you can gain new insight into emerging fraud and social engineering scam trends. Aggregating information and knowledge empowers your organization to anticipate and respond quickly to new threats.
Expand Scam ‘Kill Chain’ Opportunities
Hyper-personalized data can also enable “just-in-time” training and awareness prompts for customers. For example, if a customer suddenly transfers large sums of money from a savings account to an investment account, it’s a good opportunity to warn them about the threat of investment or cryptocurrency scams. Making full use of customer data enables your bank to deliver highly relevant guidance to customers at the right time.
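As a hypothetical example of such a prompt, a simple rule might intercept this exact pattern and surface a targeted warning before the payment completes; the account fields, thresholds, and wording are all illustrative.

```python
def just_in_time_prompt(transfer: dict) -> str | None:
    """Return a warning message if this transfer matches a common scam pattern."""
    unusually_large = transfer["amount"] > 5 * transfer["avg_savings_withdrawal"]
    risky_destination = transfer["payee_category"] in {"investment", "crypto_exchange"}
    if unusually_large and transfer["source_account"] == "savings" and risky_destination:
        return ("Before you send this: investment and cryptocurrency scams often start "
                "with a transfer just like this one. Did someone you've never met ask "
                "you to move this money?")
    return None
```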
Conclusion
Gone are the days of the Nigerian prince sending random emails riddled with grammatical and spelling errors. Today’s fraud efforts are becoming increasingly personalized to quickly gain a victim’s trust before they can stop and reconsider.
Social engineering fraud puts banks under pressure to protect their customers at the crucial moment when those customers may be trusting a criminal’s lies. By using hyper-personalized data, banks can show customers how well they understand their behavior, cementing that trust before money is lost.
Resources
- Blog: The Psychological Impact of Scams
- Report: 2025 AI Trends in Fraud and Financial Crime Prevention
- Solution Guide: How Feedzai Closes the WhatsApp Scams Loophole
- Solution: Scam Detection & Prevention Solutions
Frequently Asked Questions about Social Engineering Fraud
What are common social engineering techniques?
Common social engineering techniques include phishing, pretexting, and water-holing, all designed to deceive victims. More advanced methods, such as impersonation and business email compromise (BEC), involve attackers posing as trusted individuals or authority figures. These tactics manipulate human psychology, exploiting a victim’s trust to circumvent technological defenses and security protocols.
How does AI enhance social engineering threats?
AI enhances the effectiveness and scalability of social engineering. It allows attackers to create highly personalized lures and automate their campaigns. AI-generated deepfakes and convincing chatbots make scams feel incredibly real, increasing the likelihood of success and making these threats much harder to recognize and defend against.
What is social engineering in cybersecurity?
Social engineering is a psychological manipulation tactic in cybersecurity. Rather than hacking a system, it exploits human nature, like curiosity or fear, to persuade people to compromise security protocols. This can include revealing sensitive information or granting access to secure systems, making humans the most critical vulnerability.
How can organizations effectively defend against these threats?
To combat these threats, organizations should implement multi-layered security. This involves using real-time AI and machine learning detection systems, alongside behavioral biometrics, to identify unusual activity. Continuous user education and proactive simulation training are essential for equipping employees to recognize and resist these deceptive attacks.
Can AI help stop AI-enhanced attacks?
Yes. AI is an effective tool against AI-enhanced attacks. Advanced AI/ML solutions analyze vast datasets to detect anomalies and behavioral patterns that indicate a scam. Feedzai’s AI/ML platform detects anomalies in real time, helping banks stop AI-enhanced scams before funds leave accounts.
Footnotes
2 https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf
4 https://www.ibm.com/think/insights/generative-ai-social-engineering
6 https://www.weforum.org/stories/2025/07/why-detecting-dangerous-ai-is-key-to-keeping-trust-alive/
7 https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
8 https://www.boston.com/news/crime/2025/08/12/international-elder-fraud-ring/
9 https://thehackernews.com/2025/08/clickfix-malware-campaign-exploits.html
10 https://www.wired.com/story/yahoo-boys-real-time-deepfake-scams/
All expertise and insights are from human Feedzaians, but we may leverage AI to enhance phrasing or efficiency. Welcome to the future.