Financial services take lessons from machine learning

By Emma Olsson | 19 November 2019

Open Banking promises increased access to customer data, but with newfound transparency comes a greater need to understand how that data is used. Together with developments in machine learning (ML), that access is changing markets.

So far, ML has been used in financial services to manage content and data, detect fraud and money laundering, facilitate biometric identification, trade via algorithms, and assist with regulatory compliance. A survey by the Bank of England, undertaken in the first half of this year, found that two-thirds of UK financial services companies are now using ML. Nineteen percent of the companies surveyed have created, or are creating, centres to promote the technology internally. The same survey found that anti-money laundering (AML) and fraud detection, along with customer-facing applications such as chatbots, made the heaviest use of machine learning.

With the second Payment Services Directive (PSD2) in the EU and Open Banking gaining traction in the UK, the opportunities for more complex ML applications have never been greater. But there are hurdles. Even as directives such as the EU’s General Data Protection Regulation (GDPR) strive to ensure the ethical use of customer data, the legitimate interpretation of this data by ML is not assured. A 2018 Deloitte study outlined concerns around analytics and AI in financial services:

“The heightened ethical responsibilities for use of data includes how data is interpreted via algorithms. This requires an understanding of any unintended consequences and potential biases in algorithms.”

The study points to the example of algorithmic pricing inadvertently discriminating against certain classes of people, given the socially sensitive nature of the data involved. Nonetheless, ML adoption continues to grow in customer services and fraud detection.

On November 18, the UK’s Information Commissioner’s Office (ICO) published a blog post on data ethics and the digital economy, announcing the appointment of its first ever data ethics adviser.

In May, the Financial Conduct Authority (FCA) argued for what the regulator called “sufficient interpretability”:

“This is the point where supply and demand meet and where society finds the right balance between the benefits of machine learning, and AI more generally, and the need to make sense of its predictions and decisions.”

The FCA argued that explanations can never be absolute, and that they come at the expense of efficiency in machine learning.

This year, market participants have prioritised data management over factors such as customer experience for the first time, according to Jim Marous, publisher of the annual Digital Banking Report, speaking at the Open Banking Expo in London on November 13.

Deep learning becomes the norm

Deep learning (DL) is a subset of machine learning, which itself is a subset of AI. In standard machine learning, an algorithm improves in performance as it is exposed to more data over time. DL takes this a step further, employing “multi-layered neural networks” that learn from vast amounts of data; the “deep” refers to neural networks (NNs) with more than two hidden layers. For ML to reach its full potential, large amounts of data are needed: the more data a model sees, the more precise it becomes. With access to data sets increasing globally, the scope for DL is growing. As the BoE survey published last month shows, customer support and fraud detection see the most ML and DL implementation.
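To make the distinction concrete, below is a minimal, hypothetical sketch in Python of what a “deep” network looks like in practice: a classifier with three hidden layers rather than one or two. The feature names, layer sizes, and data are illustrative only, and are not drawn from the BoE survey or any firm’s system.

```python
# Illustrative only: a "deep" network is simply one with multiple hidden layers.
# The synthetic features and labels below stand in for real transaction data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features: [amount, hour_of_day, merchant_risk_score]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 1).astype(int)  # toy "fraud" label

# Three hidden layers (64, 32, 16 units) make this "deep" rather than shallow.
model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)
model.fit(X, y)

print(model.predict_proba(X[:5]))  # fraud probabilities for the first five rows
```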

Customer support remains a key concern

Like ML, DL is not a new concept, having been used outside finance for the past decade: Apple’s Siri (2011) and Amazon’s Alexa (2012) both use DL in the form of natural language processing. While established in retail, deep learning is now booming in finance, driven mainly by the expansion of big data, an umbrella term that, according to the MIT Initiative on the Digital Economy, covers customer demographics, site clicks, current product holdings, and consumption records. The larger the data sets, the more ML and DL can learn and infer on their own, rather than relying on explicitly programmed rules. The greatest potential reward here is an optimised customer experience, through DL’s ability to predict what the customer may want.

Fraud detection use grows

What distinguishes DL from ML is its flexibility. DL can build robust, bespoke models for specific tasks, which fraud detection firms such as Ravelin see as valuable in preventing fraud. In the past, fraud prevention relied on building rules-based algorithms that triggered ML procedures to detect certain red flags. The problem with this method is its one-size-fits-all approach to detection: a rule may flag large transactions from a previously unknown location, for example, which can also block many genuine customers, according to cybersecurity analysis firm SecurityIntelligence. This creates a tension between providing a smooth customer experience and preventing fraud, leaving financial services providers in a constant balancing act. With two-thirds of UK financial services companies now using ML, as the BoE report shows, that friction touches a large share of the customer experience. ML may be quicker, more efficient, and more cost-effective than a team of human analysts, but customer frustration is a serious problem: a 2015 report from Javelin Strategy and Research estimated that only one in five fraud predictions is correct, and that false declines cost an estimated $118bn in lost revenue.
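As a purely hypothetical illustration of that rules-based approach, the short Python sketch below shows a hard rule that blocks large transactions from previously unseen locations. The threshold and location list are invented for the example; the point is simply that a genuine customer abroad gets blocked just as a fraudster would.

```python
# Hypothetical rules-based check: flag any large transaction from a location
# not previously associated with the customer. Values are illustrative only.

KNOWN_LOCATIONS = {"London", "Manchester"}  # locations seen before for this customer
LARGE_AMOUNT = 500.0                        # illustrative threshold in GBP

def rule_based_flag(amount: float, location: str) -> bool:
    """Return True if the transaction should be blocked under the rule."""
    return amount > LARGE_AMOUNT and location not in KNOWN_LOCATIONS

# A genuine customer on holiday is blocked just as a fraudster would be.
print(rule_based_flag(750.0, "Lisbon"))   # True, even for a legitimate purchase
print(rule_based_flag(750.0, "London"))   # False
```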

Research firms such as Gartner argue that rising customer expectations in e-commerce make DL essential to the sector’s future. Officially in effect since January 2018, open banking has not caused any drastic upheaval; rather, it has ushered in slow, incremental change to the UK financial system. It is precisely this kind of shifting landscape that DL is best placed to handle: according to data analytics company FICO, the unsupervised models enabled by DL are best suited to changing environments.

FICO’s PSD2 whitepaper from August 2017 distinguishes between supervised and unsupervised ML in fraud detection, and states that both are useful. Supervised ML is often used in deciding whether to accept or decline a payment. These models employ neural networks, but they also rely on human operators to categorise data in advance.

Unsupervised ML - the type compatible with DL - works with data that has not been previously categorised by a human operator, attempting to spot patterns and outliers on its own. This type is particularly useful under PSD2, where a quick reaction time to new threats is necessary.
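As a rough sketch of that supervised/unsupervised split, in general terms rather than FICO’s actual models, the Python example below trains a supervised classifier on analyst-labelled payments and, separately, an unsupervised outlier detector that needs no labels at all. The data and features are synthetic and purely illustrative.

```python
# Minimal sketch of supervised vs unsupervised fraud scoring on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))           # hypothetical features: amount, hour, velocity, device score

# Supervised: analysts have already labelled past payments as fraud / genuine.
y = (X[:, 0] + X[:, 3] > 2).astype(int)  # toy labels standing in for analyst decisions
supervised = LogisticRegression().fit(X, y)
fraud_prob = supervised.predict_proba(X[:1])[0, 1]  # used to accept or decline a payment

# Unsupervised: no labels; the model looks for outliers relative to normal traffic,
# which lets it react to fraud patterns it has never been shown before.
unsupervised = IsolationForest(random_state=42).fit(X)
is_outlier = unsupervised.predict(X[:1])[0] == -1   # -1 marks an anomaly

print(f"fraud probability: {fraud_prob:.2f}, flagged as outlier: {is_outlier}")
```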
