Banks and regulators are waking up to the anti-money laundering (AML) and fraud detection potential of artificial intelligence (AI), but attempts by some software vendors to cash in on the AI buzz are “laughable”, according to Marc Andrews, vice president for financial crime at IBM Watson.
“In 2017, you could go to a conference and see the different vendors, their booths and marketing material, and a lot of it was about helping your investigation, generating alerts. Maybe you might have seen one or two booths that even referenced AI. I would say that 80% of the booths now have the words AI or artificial intelligence in their marketing.”
According to the AI Index, the number of active AI startups doubled between 2015 and 2018, as AI venture capital funding saw a 450% increase. “Some vendors I walk by and talk to them and when you see what they’re doing it’s laughable. They’re using AI as a buzzword.”
“AI is really a category of various techniques for application. There are some vendors that are talking about their solutions being AI, when they're nothing more than rules-based systems. Then there are others that may be doing one technique, but there is much more to it than that.”
Pressure has been mounting on banks in recent times, says Andrews, on the back of Russian money laundering scandals. “We’re seeing continuous and growing fines that banks are facing around this. We see incremental requirements for conducting enhanced due diligence in a lot of geographies, where they want enhanced due diligence conducted not just on the high-risk customers but on some of the medium-risk customers as well, just as other parts of the business are starting to open up and do well.”
In the first half of 2018, regulators imposed more than $1.7bn in fines related to anti-money laundering, according to a Debevoise report, $300m shy of the figure for the whole of 2017. Fenergo data shows that banks and financial institutions have been fined $26bn since 2008 for poor AML compliance.
For the banks, the only way to address increasingly strict regulatory requirements is to add bodies. “That's really the current approach, just add people. [Banks] are now finding that that's not sustainable and as a result, they're looking for alternative ways to drive the efficiency of their existing people and improve the effectiveness of their techniques for monitoring and reviewing the transactions and alerts that are being generated today.”
When money laundering and fraud problems began to rise again after the global financial crisis, says Andrews, the initial answer for most organizations was “how do we reduce costs?” This led to the creation of offshore AML units in places like India “to just lower the cost of compliance people”. “Instead of paying $50,000 to $60,000 a year for a mid-level analyst in the US, they would pay $10,000 to $20,000 in India. Just because it's half the cost doesn't mean they're doing the same amount of work for half the cost.” Banks have reached the limit when it comes to savings via offshore operations, says Andrews. “I think that's why now they're looking for new techniques and new approaches.”
Pressure is starting to come from regulators now, too, he adds. “There's a bank that we've been having these discussions with for over a year now. They were always interested in the technology, and they loved the demos we showed them. But they were always very hesitant to move forward; one of their big concerns was regulatory support. Interestingly, they had gone quiet recently, but called us last week and said that they are now getting pressure from the regulators, who are pushing them to start evaluating the use of artificial intelligence technologies in this space and telling them that they need to be a little bit more innovative.”
In December 2018, the US Treasury Department’s anti-money laundering unit, the Financial Crimes Enforcement Network (FinCEN), and federal banking regulators issued a joint statement encouraging banks and credit unions to take “innovative approaches to combating money laundering, terrorist financing and other illicit financial threats.” The statement said that pilot programs undertaken by banks involving artificial intelligence “will not necessarily result in supervisory action with respect to that program”. FinCEN wrote: “For example, when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not automatically assume that the banks’ existing processes are deficient.”
“We should give [regulators] credit for stepping up. But I wouldn't say they've moved quickly. Like I said, this is stuff we've been talking to them about for two years. Now they’re coming together to recommend new ways of tackling money laundering. But for a while the banks have been deputized to do the work of the governments and the regulators in this space. The banks’ job should not be to catch criminals or to identify and catch money laundering. Yet they’re having to go out and identify potential money laundering operations.
“The regulators are seeing that banks are continuing to miss things. You look at what's going on in the Baltics with the Russian money laundering schemes, what's going on in North Korea, and the banks are challenged because they can't always tell where the money is going. Because of that, the regulators are recognizing that this is a difficult problem, and that we need new techniques [like AI] to address it.”
Evidence and insights
The AML space is completely unlike fraud, adds Andrews. “In fraud, when an alert is generated, the banks will make their own business decision as to how far to take it. They have built in acceptable loan loss reserves and levels, and it's okay if a bank takes some losses and misses things. They build it into their profit and loss.” When it comes to AML, though, firms need to document their business decisions. “They have to maintain audits and controls around why they dismiss things, and there's more of a zero-tolerance approach. They can't just miss some major money laundering, because it's not about them losing money, it's about broader societal issues.”
“The regulators and the examiners are going to come in and ask them why they dismissed certain alerts. And they can't just say, ‘because Watson told me to’ or, ‘because this model scored it low’, they have to be able to explain the results.” This, according to Andrews, is behind some of the hesitancy with which financial institutions have approached artificial intelligence and machine learning solutions.
“IBM - and I think we've seen other companies following suit - has spent the past couple of years working on how to make these types of techniques and results more explainable and trainable. It’s not just generating results but generating evidence-based insights, so instead of saying ‘we think this is a false positive’ we say, ‘here are three or four insights that we've generated that make us believe that this is likely a false positive or this is likely suspicious activity’.”
A problem Andrews encounters a lot is the mistaken belief that AI is “just one thing”. “There are a broad set of techniques out there. Natural language understanding can take unstructured text, read through it and understand the concepts in there, not just the keywords. Then there’s identity resolution and network detection, being able to identify if individuals are who they say they are. We also have advanced analytics, which can generate clusters of data based on the learned behaviour of individuals over time. On top of that there’s supervised learning, where a machine can be given a ‘thumbs up’ or ‘thumbs down’ to help build its knowledge.
“There are some vendors out there that are really good at doing dynamic segmentation, some that are good at doing identity detection, and some that are good at doing just natural language understanding.” Each of these techniques, says Andrews, has value on its own, but combined they create “exponential value” and show what artificial intelligence can really do.
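The pairing Andrews describes, supervised “thumbs up/thumbs down” learning that produces explainable, evidence-based verdicts rather than a bare score, can be sketched in a few lines of plain Python. Everything below (the feature names, the perceptron-style learner, and the toy data) is invented for illustration and is not IBM's implementation:

```python
# Illustrative sketch only: a tiny supervised learner for AML alerts that
# also reports which features drove each verdict, so an analyst can
# explain a dismissal instead of saying "the model scored it low".
# All feature names and data are hypothetical.

FEATURES = ["amount_zscore", "country_risk", "txn_velocity"]

def train(alerts, labels, epochs=50, lr=0.1):
    """Perceptron-style training: labels are 1 (suspicious) or 0 (false positive)."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(alerts, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # "thumbs up/down" feedback adjusts the weights
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def score_with_insights(w, b, x, top_k=2):
    """Return a verdict plus the features that contributed most to it."""
    contribs = sorted(zip(FEATURES, (wi * xi for wi, xi in zip(w, x))),
                      key=lambda t: abs(t[1]), reverse=True)
    total = sum(c for _, c in contribs) + b
    verdict = "suspicious" if total > 0 else "likely false positive"
    return verdict, [name for name, _ in contribs[:top_k]]

# Toy training data: large amounts routed through high-risk countries in
# bursts are labeled suspicious; small, slow, low-risk activity is not.
train_x = [[2.5, 0.9, 0.8], [0.1, 0.1, 0.2], [1.8, 0.7, 0.9], [0.2, 0.2, 0.1]]
train_y = [1, 0, 1, 0]
w, b = train(train_x, train_y)
print(score_with_insights(w, b, [2.0, 0.8, 0.7]))
```

The point of `score_with_insights` is the shape of its output: not just a label, but the two or three signals behind it, mirroring the “here are three or four insights that make us believe this is likely a false positive” pattern Andrews describes.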