As regulation, customer expectation and increased data volumes push legacy bank data models to breaking point, UK financial services provider Nationwide has added a real-time event-streaming platform.
“We’ve recently announced a large investment to modernise our IT estate. Kafka and this data architecture form probably the number one priority in that investment,” said Robert Jackson, head of application architecture at Nationwide, at the Kafka Summit in London this week.
Gaetan Castelein, vice president of product marketing at Confluent, the company behind Kafka, believes further digital disruption is coming to a head.
“The underlying issue for financial services is they are all under pressure to do more with valuable data,” said Castelein on the summit’s sidelines. “Enterprises have this data, but often in silos. How do you integrate that data from different sources into applications in real time, with its context?
“The tech giants have solved that problem, and Kafka was invented at LinkedIn to solve that very same problem,” he said, adding that, traditionally, enterprises relied on messaging systems and Extract-Transform-Load (ETL) batch processing, neither of which scales well or offers real-time capabilities.
Nationwide has committed to spend £4.1bn on digital transformation. The reason to act now, Jackson believes, is to cope with the trinity of increased data volumes, Open Banking compliance and rising customer expectations, or risk losing out to the competition.
“The reason why this is important is because we’ve suddenly been hit with large volumes,” said Jackson, “[Open Banking] aggregators can hit us, customers can hit us, we have to react to that because our competitors will.
“People are logging into their current accounts many times a day and expect to see transactions in real time. In the same way that people expect Gmail to show them every email they’ve ever had, people expect that from their transaction history,” he said.
Kafka’s inherently distributed nature also brings resilience benefits, according to Confluent’s Castelein: “If one of your silos goes down, your systems go down. Kafka has built-in redundancy, so when data is flowing through the system, if a node goes down it doesn’t matter, because your data is replicated.”
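The mechanism Castelein is describing is partition replication: Kafka writes each topic partition to several broker nodes, so losing one node loses no data. A toy sketch of the idea in plain Python (this is an illustration of the concept, not Kafka’s actual implementation):

```python
# Toy illustration of partition replication: three "brokers" each hold
# a replica of the same partition log, so one node failing does not
# make the data unavailable. Names here are illustrative.

class Broker:
    def __init__(self, name):
        self.name = name
        self.partition = []   # this broker's replica of the partition
        self.alive = True

    def append(self, record):
        if self.alive:
            self.partition.append(record)

def replicated_write(brokers, record):
    """Write the record to every live replica."""
    for b in brokers:
        b.append(record)

def read_partition(brokers):
    """Serve reads from the first replica that is still alive."""
    for b in brokers:
        if b.alive:
            return list(b.partition)
    raise RuntimeError("all replicas down")

brokers = [Broker("node-1"), Broker("node-2"), Broker("node-3")]
replicated_write(brokers, {"account": "123", "amount_pence": -4250})

brokers[0].alive = False          # one node goes down...
print(read_partition(brokers))    # ...the data survives on the others
```

In a real cluster the equivalent knob is the topic’s replication factor, typically set to three for exactly this failure mode.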
Over the past few years, the financial services sector has been hit with IT outages that have affected the majority of established banks including Barclays, HSBC, Lloyds and TSB.
“The resilience angle is that it’s deployed in multiple places. If we have some planned outage on the mainframe, it means customers can’t use their internet banking app, which they’re not happy about, because Google doesn’t do that, so why should Nationwide?” said Jackson.
Old for new
Nationwide’s existing architecture, dubbed by Jackson a “museum of technology”, shown alongside the target architecture. Source: Nationwide Kafka Summit presentation.
“We call it a speed layer, but you could probably translate that to ‘streaming platform with other bits at the side of it’,” said Jackson, describing Nationwide’s target system architecture (right diagram). “It’s a preferred source of data for high-volume read-only data requests and event sourcing. It will deliver secure, near real-time data from our source systems to our channel applications.”
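Event sourcing, which Jackson names as one use of the speed layer, means the current state of an account is never stored directly; it is derived by replaying an ordered stream of transaction events, which is also what makes the full transaction history available to a channel app. A minimal sketch, with illustrative field names rather than Nationwide’s actual schema:

```python
# Minimal event-sourcing sketch: state is a fold over the event log.
# Field names are illustrative; amounts are in pence to avoid
# floating-point rounding.

events = [
    {"type": "AccountOpened", "account": "07-00-01", "amount_pence": 0},
    {"type": "DepositMade",   "account": "07-00-01", "amount_pence": 25000},
    {"type": "PaymentSent",   "account": "07-00-01", "amount_pence": -3499},
]

def current_balance(event_log):
    """Derive the present-day balance by replaying every event."""
    return sum(e["amount_pence"] for e in event_log)

print(current_balance(events))  # 21501 pence, i.e. £215.01
```

Because the log is the source of truth, the same replay can rebuild any read model the channel applications need.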
Kafka will be used in Nationwide’s speed layer, in event-based designs for originations journeys, and for high-volume messaging in payments.
“Once we get into Kafka,” said Jackson, “we can then use stream processing techniques to push into other Kafka topics and then load that into databases like MongoDB and Cassandra.
“It enables you to create a near real-time copy of mainframe data [for] the microservices. This gives you a lot of opportunities in how you structure your data, the technologies you use to manage and aggregate that data,” he said.
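The pipeline Jackson describes can be sketched end to end: consume raw events, transform them with a stream-processing step, publish to a new topic, then sink into a read-optimised store. In the sketch below, plain Python lists stand in for Kafka topics and a dict stands in for a document store such as MongoDB; a real deployment would use something like Kafka Streams or ksqlDB plus a sink connector, and all names here are illustrative:

```python
# Sketch of a stream-processing pipeline: raw topic -> enrichment ->
# derived topic -> read-optimised store. Lists model topics and a dict
# models the document store; names are illustrative.

raw_topic = [  # events as they arrive off the mainframe
    {"account": "A1", "amount_pence": -500,  "desc": "COFFEE"},
    {"account": "A1", "amount_pence": 12000, "desc": "SALARY"},
    {"account": "B2", "amount_pence": -3000, "desc": "RENT"},
]

def enrich(event):
    """Stream-processing step: derive a field for the downstream topic."""
    direction = "credit" if event["amount_pence"] > 0 else "debit"
    return {**event, "direction": direction}

enriched_topic = [enrich(e) for e in raw_topic]

# Sink step: aggregate per account into a view a channel app could
# query without ever touching the mainframe.
read_store = {}
for e in enriched_topic:
    view = read_store.setdefault(
        e["account"], {"balance_pence": 0, "transactions": []}
    )
    view["balance_pence"] += e["amount_pence"]
    view["transactions"].append(e)

print(read_store["A1"]["balance_pence"])  # 11500
```

The design point is the one Jackson makes: once the data is flowing through topics, each derived view can choose its own storage technology and shape independently of the source system.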
A detailed look at the internal plumbings of Nationwide’s data flow. Source: Nationwide Kafka Summit presentation.
Next steps for Nationwide’s event-streaming platform. Source: Nationwide Kafka Summit presentation.