
Protecting your business from AI scams 

Americans lost $12.5 billion to scams in 2023, and the rise of AI will only accelerate the problem. Today’s fraudsters routinely use AI tools to clone individuals’ voices, create realistic-looking deepfakes, or generate credible-sounding text for use in their schemes. All of this makes scams harder to detect and prevent, posing a serious challenge for fraud operations experts.


Article at a glance 

  • AI-driven scams are becoming extremely sophisticated and lucrative, threatening to overwhelm traditional security protections.  
  • Fraud operations experts must fight fire with fire, leveraging AI-powered identity solutions to flag fraudulent users and transactions fast.  
  • New AI-powered scams will continue to emerge, and it’s up to fraud operations experts to implement the security infrastructure that lets them respond quickly.

Fortunately, the technology behind these scams is also your best tool to fight them. AI-driven fraud prevention and risk detection tools can help you spot and mitigate AI scam activity at scale. Ultimately, investing in your own AI tools will help you keep your users safe at every stage of the customer journey.


How is AI used in scams? 

In early 2024, a Hong Kong employee of a multinational company received what appeared to be a $25.6 million transfer request from his chief financial officer (CFO). Initially suspicious, he hopped on a video call with the CFO and other colleagues to verify the validity of the request. What he saw convinced him the request was legitimate, so he transferred the money. Little did he know, not only was the initial message he received AI-generated, but the people he thought he’d seen on the video call were deepfakes. He had just transferred millions of dollars into the hands of extremely sophisticated AI scammers. 

While this is an extreme example, it shows just how complex AI scams are becoming, as well as the limitations of traditional security protocols in protecting against them. After all, the employee followed traditional best practices by reaching out to his CFO after receiving the suspicious message. With the help of AI, the fraudsters were just one step ahead.

Fraudsters are now able to leverage multi-faceted, AI-driven scams at every stage of the customer journey. For example, they could use deepfakes to impersonate a legitimate customer at account opening, or use AI-generated phishing emails to steal a user’s password and initiate fraudulent transactions from their existing account. All of these tactics present significant risks for financial institutions, e-commerce vendors, and any other organization that’s looking to protect sensitive financial and customer data.  

However, many fraud operations experts don’t have the security infrastructure to meet fraudsters where they are, especially at scale. For example, long manual review queues may mean scams don’t get flagged until it’s too late. Addressing the AI scam threat likely means upleveling your security protections with new, sophisticated tools.

How AI fraud prevention tools can fight AI scams 

The best way to prevent AI scams? Get to know your users. AI tools can help you verify that customers are who they say they are at every step of their journey.  

  • Account opening fraud: Scammers often adopt false identities to apply for credit fraudulently or to open “mule” accounts where they can receive stolen funds. AI-driven identity insights and risk scores can help you identify and block these fraudsters quickly, before they can do damage. 
  • Account takeover attacks: Behavioral biometrics tools can flag unusual behavior that could be a sign of an account takeover attack, such as typing cadences that don’t match a known user’s usual signature. Use AI to automate your response and lock out fraudsters fast, without creating unnecessary friction for trusted users (a minimal sketch of this idea follows this list). 
  • Scam payments: For financial institutions specifically, AI-enhanced banking intelligence can help stop scam payments. Real-time analysis of money flows lets you flag and block suspicious transactions before money changes hands. Validation of account details via user-permissioned open banking data also boosts security while keeping payments frictionless. 
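
To make the behavioral-biometrics idea above concrete, here is a minimal, hypothetical sketch in Python. It assumes a baseline typing-cadence profile for the trusted user is already available, measures how far the current session deviates from that baseline, and combines the result with two simple transaction signals into an allow, step-up, or block decision. The class names, weights, and thresholds are illustrative assumptions, not part of any Mastercard product or API.

```python
# Hypothetical sketch: combining a typing-cadence deviation with simple
# transaction signals to score a session. Names and thresholds are illustrative.

from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class TypingSample:
    inter_key_ms: list[float]   # time between keystrokes in this session


@dataclass
class KnownProfile:
    baseline_inter_key_ms: list[float]   # historical cadence for the trusted user


def cadence_deviation(sample: TypingSample, profile: KnownProfile) -> float:
    """How many standard deviations the session's mean cadence sits from baseline."""
    base_mean = mean(profile.baseline_inter_key_ms)
    base_std = pstdev(profile.baseline_inter_key_ms) or 1.0   # avoid divide-by-zero
    return abs(mean(sample.inter_key_ms) - base_mean) / base_std


def score_session(sample: TypingSample, profile: KnownProfile,
                  new_payee: bool, amount_over_norm: bool) -> str:
    """Combine behavioral and transactional signals into a simple decision."""
    risk = min(cadence_deviation(sample, profile), 5.0) * 10   # cap behavioral weight
    risk += 25 if new_payee else 0
    risk += 20 if amount_over_norm else 0

    if risk >= 70:
        return "block"      # likely account takeover: lock the session out
    if risk >= 40:
        return "step_up"    # ask for an additional verification factor
    return "allow"          # keep friction low for trusted behavior


# Example: a session typing far faster than the user's baseline and paying a
# new payee scores high enough to be blocked rather than waved through.
profile = KnownProfile(baseline_inter_key_ms=[120, 130, 125, 118, 127])
session = TypingSample(inter_key_ms=[60, 55, 62, 58, 59])
print(score_session(session, profile, new_payee=True, amount_over_norm=False))
```

In production, the hard-coded weights would come from a trained model and the decision would feed an automated response, but the shape of the logic stays the same: score behavioral and transactional signals together, then act on thresholds.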

The time to respond to AI-powered scams is now 

We’re only just beginning to feel the impact of AI-driven scams. You might even know someone who’s fallen victim to one of these hard-to-detect and costly schemes. Educating individuals about scam tactics only goes so far: after all, the employee in Hong Kong followed the correct protocols to confirm the transaction request and still lost millions of dollars to fraudsters.

To protect against AI scams on an enterprise level, you need greater insight into users’ identities at every touchpoint, from account opening to initiating a transaction. This means integrating advanced identity engines that validate your users are who they say they are and that flag suspicious money transfers and other financial activity. This approach can help your team stay one step ahead of scammers, even as new forms of AI-powered fraud emerge.

To learn how Mastercard Identity solutions can boost both your security and your bottom line, visit our ROI Fraud Calculator.

