
Will AI make chargeback fraud easier?

As Artificial Intelligence (AI) continues to revolutionise industries, its role in the battle against chargeback fraud is both promising and challenging.

While AI-powered systems enhance the detection and prevention of fraudulent claims, they also equip fraudsters with sophisticated tools to create false chargebacks. This double-edged sword raises critical questions about the future of online security.

Could the same technology designed to protect businesses be turned against them?

  • Roger Alexander
  • July 1, 2024
  • 6 minutes

AI has been a vital part of anti-fraud and anti-chargeback operations for over a decade, long before the current wave of enthusiasm for AI (specifically, for large language models like ChatGPT). It is used for everything from checking individual chargeback claims to fixing the entire ‘broken’ chargeback process.

Given the sheer quantity of chargebacks, it would be impossible for human operators to examine and contest all of them, or even a meaningful fraction of them. If companies want to avoid losing money to false chargeback claims, then AI is really the only solution – and yet it is a solution that may have unexpected consequences.

The same technology that allows companies to prevent chargebacks could also be used to automate the creation of false chargebacks of far better ‘quality’ than those produced by amateurs. This would allow bad actors to operate at a much larger scale, with chargeback claims that have a better chance of evading detection. That would be a disaster for online merchants.

So, what are the capabilities of AI-powered fraud, and what can companies do to blunt its impact?

AI and large language models

The first and perhaps most important step in making sense of the current craze for AI is understanding the difference between ‘true’ AI, or artificial general intelligence, and the output of a large language model (LLM).

A large language model works by collecting and annotating vast amounts of written information to find patterns. For instance, it would ‘notice’ that the term ‘The Battle of Hastings’ often occurs alongside words like ‘1066’ and ‘William the Conqueror’ and is sophisticated enough to answer ‘1066’ to queries about the date of the battle and ‘William the Conqueror’ to questions about who won.
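
To make that pattern-matching idea concrete, here is a deliberately tiny, hypothetical sketch in Python (the corpus, questions, and scoring are invented for illustration and bear no resemblance to a real LLM’s scale or training): it ‘answers’ a question by picking whichever candidate term co-occurs most often with the question’s words, with no understanding involved.

```python
from collections import Counter
import re

# A toy "training corpus" (invented for illustration).
corpus = [
    "The Battle of Hastings was fought in 1066.",
    "William the Conqueror won the Battle of Hastings.",
    "In 1066 William the Conqueror defeated King Harold at Hastings.",
]

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def answer(question, candidates):
    """Pick the candidate that co-occurs most often with the question's words."""
    q_words = set(tokenize(question))
    scores = Counter()
    for sentence in corpus:
        s_words = set(tokenize(sentence))
        overlap = len(q_words & s_words)
        for cand in candidates:
            if set(tokenize(cand)) <= s_words:
                scores[cand] += overlap
    return scores.most_common(1)[0][0]

print(answer("When was the Battle of Hastings?", ["1066", "1815"]))
# -> "1066", chosen purely from co-occurrence, not understanding
print(answer("Who won the Battle of Hastings?", ["William the Conqueror", "Napoleon"]))
# -> "William the Conqueror"
```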

However, LLMs have a serious problem with ‘hallucinations’, in which they make basic mistakes due to incomplete or noisy training data or a misunderstanding of the context. This prevents them from ever attaining artificial general intelligence (AGI), the state of being truly indistinguishable from human intellect, and means that they are unsuitable for many commercial applications, especially where money is on the line.

There have already been cases of AI chatbots promising customers refunds they weren’t entitled to. Functionally, LLMs are a real-world version of the Chinese Room thought experiment, in which a person is handed a question in Chinese and matches it to an answer from a database, even though they neither speak nor understand Chinese. LLMs can’t understand human requests, but they can convincingly match their output to our input.

AI and machine learning

The machine learning algorithms used by anti-fraud companies aren’t designed to mimic humans – they are built to extract certain information from a dataset and, to the extent that they make ‘decisions’ with that information, they do so by following decision trees rather than creating new solutions. These systems can be very sophisticated, up to the point of being able to improve themselves, but they are not ‘intelligence’ in any real sense, and perhaps this is for the best.
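
As a purely illustrative sketch of that decision-tree style of processing (the signals, thresholds, and routing labels below are invented, not any vendor’s actual rules), a hand-written tree might route a chargeback claim like this:

```python
from dataclasses import dataclass

@dataclass
class ChargebackClaim:
    # All fields are invented illustrative signals, not real scheme data.
    amount: float                 # disputed amount
    account_age_days: int         # how old the cardholder account is
    prior_disputes_90d: int       # disputes filed in the last 90 days
    delivery_confirmed: bool      # carrier confirmed delivery
    device_matches_history: bool  # same device seen on past good orders

def assess(claim: ChargebackClaim) -> str:
    """Walk a fixed decision tree and return a routing decision."""
    if claim.prior_disputes_90d >= 3:
        return "contest"                       # repeat disputer: fight the claim
    if not claim.delivery_confirmed:
        return "refund"                        # no proof of delivery: concede
    if claim.device_matches_history and claim.account_age_days > 365:
        return "refund" if claim.amount < 25 else "review"
    return "review"                            # everything else goes to an analyst

claim = ChargebackClaim(amount=180.0, account_age_days=40,
                        prior_disputes_90d=4, delivery_confirmed=True,
                        device_matches_history=False)
print(assess(claim))  # -> "contest"
```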

For this reason, LLMs have limited applications in fighting chargebacks. Being able to generate large amounts of relatively convincing (but often inaccurate) text isn’t going to move the needle on the epidemic of chargeback fraud, especially when existing machine-learning systems work extremely well.

Could LLMs commit chargeback fraud?

The short answer is… absolutely, and they likely already are.

While many chargebacks are raised by individuals, a significant portion is carried out by professional groups. For these groups, quantity is important – they may be able to get a few dollars from each fraud attempt, but by carrying out hundreds a day, they can make incredible amounts of money – at your company’s expense.

Just as it is impossible for human operators to deal with every chargeback attempt, it is also very difficult for fraudsters to deal with the ‘paperwork’ generated by making dozens of chargeback attempts each day. Not only do the chargebacks themselves have to be created, but fraudsters may also have to answer inquiries from card schemes, and they will have to do so very accurately or they will be caught.

There is also the important step of building synthetic identities: no professional criminal would use their own identity, so they have to create identities from stolen information. Stealing this information is much easier when an LLM can produce vast amounts of convincing text at the touch of a button and continue to reply to targets just as an AI chatbot could.

It will not be perfect, and a canny internet user will be able to see the telltale signs of AI-generated content, but that won’t matter. Try enough times, and you will find somebody, likely an elderly or marginalised person, who is convinced by a modern equivalent of a ‘Nigerian Prince’ scam.

Combatting AI-enabled chargeback fraud

It seems entirely possible that LLMs can be used to create large amounts of relatively convincing written content to support fraud.

  • Is this the death knell for our efforts to fight chargeback fraud?
  • Now that fraudsters are using the latest generation of AI, are anti-fraud companies outgunned?

In a word, no. Creating written content plays only a small role in carrying out online fraud effectively, with the exception of gathering the data needed to create a synthetic identity.

Anti-fraud systems used by every major payments company look for much more than written content: they analyse potentially thousands of signals, no matter how small and seemingly insignificant, to build a complete threat assessment of each transaction and chargeback request. Even if any written elements (which are likely to be minimal) are perfectly fine, there are still more than enough places where a fraudster can slip up, and our track record shows that our constantly updated systems are more than capable of handling AI-enabled fraud.
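
As a rough illustration of how many small signals can combine into a single threat assessment (the signal names, weights, and threshold below are invented for illustration, not Chargebacks911’s actual scoring), a simple weighted model shows why flawless written content alone doesn’t get a fraudulent claim through:

```python
# Hypothetical signal weights: higher means more suspicious.
SIGNAL_WEIGHTS = {
    "ip_country_mismatch": 0.30,         # IP geolocation differs from billing country
    "new_device": 0.15,                  # device fingerprint never seen before
    "velocity_spike": 0.25,              # many disputes from this identity recently
    "email_recently_created": 0.10,      # free email address registered days ago
    "narrative_contradicts_logs": 0.20,  # claim text conflicts with delivery/usage logs
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name, False))
    return min(score, 1.0)

# A claim whose written story is flawless can still score high on other signals.
claim_signals = {
    "ip_country_mismatch": True,
    "new_device": True,
    "velocity_spike": True,
    "email_recently_created": False,
    "narrative_contradicts_logs": False,
}
score = risk_score(claim_signals)
print(f"risk score: {score:.2f}")   # -> 0.70
print("route to manual review" if score >= 0.5 else "auto-process")
```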

 


Roger Alexander is a key advisor to Chargebacks911’s Advisory Board and CEO, Monica Eaton. He assists with the company’s expansion, particularly the launch of its dispute resolution solution for APP fraud claims. With over 36 years in payments, Alexander has held leadership roles at Barclays, Switch (UK’s Debit Card), and Elavon Merchant Services Europe. He currently advises Tarci and Pennies and has held NED positions with ACI Worldwide, Caxton, and Valitor.