Real-time transaction tracking: Is big data the holy grail?

By Steve Colwill | 21 July 2016

A large proportion of banks are currently exploring how they can more effectively track and analyse the transactions moving across end-to-end trading or payment processes. Many are finding it to be a vital requirement in meeting the demands for greater transparency driven by regulators, client SLAs, internal incident management groups and the business at large.

For most firms the goal is to do so in real-time, so they can accurately measure and improve their business’s effectiveness and proactively control situations to avoid unnecessary financial loss. In essence, when an incident occurs they want to be able to quickly pinpoint exactly where the problem lies, identify the transactions impacted, where they are impacted and the clients they belong to. By rapidly assimilating this actionable insight, relevant teams can then quickly proceed with the issue’s resolution and remediation - minimising the impact on cost, resources and client experience.

Keen to reap these benefits, several institutions are currently initiating real-time transaction tracking projects. Many are starting by looking at big data; after all, it's now a relatively mature and powerful technology that's well understood by numerous experts. However, as countless firms are discovering (often, unfortunately, having already gone at least part of the way down this costly rabbit hole), whilst using big data ticks many boxes, there are fundamental differences between what it can deliver and what real-time transaction tracking seeks to achieve.

Take, for example, the fact that although big data enables firms to pump huge quantities of information into data lakes, query that data and subsequently surface valuable business analytics, big data tools fail to provide an understanding of exactly what's happening, here and now, to the transactions being processed, which is the core objective of real-time transaction tracking. Or consider that firms wanting to employ real-time transaction tracking usually understand how their data should be processed and specifically want to check whether that is actually happening, whereas big data solutions are usually built using generic tools designed to help firms interpret data and tell them what they don't already know. Here are just some of the issues these differences generate that are causing firms to rethink their approach:

Financial transactions are not static pieces of data

Trades and payments are not individual pieces of data that can simply be scooped out of a process and fire-hosed into a big data solution for analysis. More commonly, a transaction is the result of a series of interconnected events. Take, for instance, a market data tick that proves to be the impetus for an algo engine to generate a parent order, which is then executed as a series of child orders filled on multiple trading venues. Tracking and analysing this trade end-to-end involves tracing the path taken by the original market data tick, along with all of the orders and fills, and then tying this information together in chronological order. Once tied together, the complete chain of events can be assessed against thresholds and past trends.
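
To make the idea concrete, here is a minimal Python sketch of that kind of chain reconstruction. The event fields and the linkage scheme (each event carrying the id of the event that caused it) are illustrative assumptions, not a description of any particular firm's data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    event_id: str
    parent_id: Optional[str]  # id of the event that caused this one (assumed linkage)
    kind: str                 # e.g. "tick", "parent_order", "child_order", "fill"
    timestamp: float          # epoch seconds

def build_chain(events: list[Event], root_id: str) -> list[Event]:
    """Collect every event descended from the root and order the chain in time."""
    by_id = {e.event_id: e for e in events}
    children: dict[str, list[Event]] = {}
    for e in events:
        if e.parent_id is not None:
            children.setdefault(e.parent_id, []).append(e)

    chain, stack = [], [root_id]
    while stack:
        current = stack.pop()
        chain.append(by_id[current])
        stack.extend(c.event_id for c in children.get(current, []))
    return sorted(chain, key=lambda e: e.timestamp)

# One tick drives a parent order, two child orders and fills on two venues.
events = [
    Event("t1", None, "tick", 1.00),
    Event("p1", "t1", "parent_order", 1.10),
    Event("c1", "p1", "child_order", 1.20),
    Event("c2", "p1", "child_order", 1.30),
    Event("f1", "c1", "fill", 1.50),
    Event("f2", "c2", "fill", 1.60),
]
for e in build_chain(events, "t1"):
    print(f"{e.timestamp:.2f}  {e.kind}  {e.event_id}")
```

It is this reconstructed, time-ordered chain, rather than the individual events in isolation, that can meaningfully be compared against thresholds and past trends.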

Simply capturing and pushing tonnes of unstructured data as discrete elements into a big data solution, so that it can then be reconstructed and analysed, understandably presents many firms with huge challenges. This is especially the case when the data relationships are complex. Firms that start down this route often find they're repeating the very mistakes that led to failed data warehousing projects (big data's predecessor) 20 years ago, just this time using newer technologies. Vast amounts of poor-quality, unstructured data are pumped into lakes that the firm then struggles to analyse or gain meaningful results from.

Big data isn’t real-time analysis

Because information is captured and then sent to a big data solution for interpretation and reconstruction, it cannot be analysed in real-time. As such, if something is amiss firms will always experience a delay in being alerted. By the time staff have detected an embryonic issue and accessed the actionable insights necessary to determine how to respond, the problem is likely to have evolved, and their ability to proactively head it off diminishes. In today's fast-moving trading environment, and increasingly fast payments world, such delays can cause costs to accumulate rapidly, along with client SLA and regulatory breaches.

Big data often employs query-driven databases

Big data databases are often query-driven, whereas transactions are event-driven and by their very nature move across systems, so knowing a transaction hasn't reached somewhere can be just as important as knowing it has. For example, it's the consumption of a market data tick meeting a certain set of parameters that will drive an algo engine to execute an order. If something then happens that means the order isn't sent on to the next stage, operational teams need to be notified of the non-arrival so they can proactively manage the situation. When query-driven databases are used, determining this requires pre-programming the database to query whether an item has arrived at every single point it should across the end-to-end process. In complex environments, where huge volumes of transactions are being processed, raising the extensive number of alerts necessary to do so can prove incredibly resource-intensive.
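
By contrast, an event-driven tracker can treat non-arrival itself as a detectable condition: arm a deadline when a transaction leaves one stage, and alert only if the next expected event never turns up. Here is a minimal Python sketch of that pattern; the stage names and the timeout value are illustrative assumptions:

```python
import threading

ACK_TIMEOUT_SECONDS = 2.0  # assumed deadline for the venue's acknowledgement
pending: dict[str, threading.Timer] = {}
lock = threading.Lock()

def alert_non_arrival(order_id: str) -> None:
    # In a real system this would notify the operational team.
    print(f"ALERT: no venue ack for order {order_id} within {ACK_TIMEOUT_SECONDS}s")

def on_child_order_sent(order_id: str) -> None:
    """Arm a deadline the moment the order leaves for the venue."""
    timer = threading.Timer(ACK_TIMEOUT_SECONDS, alert_non_arrival, [order_id])
    with lock:
        pending[order_id] = timer
    timer.start()

def on_venue_ack(order_id: str) -> None:
    """Cancel the deadline when the expected event actually arrives."""
    with lock:
        timer = pending.pop(order_id, None)
    if timer:
        timer.cancel()

# Order "A" is acknowledged in time; order "B" never is, so its alert fires.
on_child_order_sent("A")
on_venue_ack("A")
on_child_order_sent("B")
```

Nothing here polls a database; the absence of an expected event is what triggers the alert.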

Furthermore, the business and algorithmic rules that define these paths are prone to frequent change as systems and applications are updated. Therefore, keeping these query points up to date can generate a significant maintenance overhead.

If big data isn’t the answer, what is?

Recent advancements in technology are now enabling firms to gain the insight they need to accurately comprehend what's happening to the transactions being processed, as it's happening, whilst incorporating the persistence and post-event analysis capabilities that they know and love big data for.

To achieve this goal, some firms are exploring in-memory, event-driven tracking techniques that have been specifically designed to capture, interpret, accurately tie together and analyse the data related to every transaction moving through the end-to-end process, contemporaneously. Because all assessment is done in real-time, the technology's adopters are instantly alerted to early warning signs, so issues can be detected the very moment they start to emerge. Firms also gain the actionable insight necessary to understand where a problem has occurred, which transactions have been impacted, where they've been impacted and the clients they belong to, so effective decisions can be taken quickly and situations brought under control before they've had a chance to develop. Additionally, with this approach, firms can gain an 'as it's happening' understanding of their performance against business goals, regulatory commitments and client SLAs. These structured chains of complex data can then be sent to the firm's big data solution for long-term storage. This enables all data pertaining to a given transaction to be easily retrieved and reconstructed for what big data best facilitates - powerful post-event analysis.
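
As a rough illustration of the pattern, here is a minimal Python sketch: events are correlated into per-transaction chains in memory as they arrive, assessed against a latency threshold on the spot, and each completed chain is handed off for long-term storage. The threshold, the event shape and the persistence hook are illustrative assumptions, not any particular product's API:

```python
LATENCY_THRESHOLD_S = 0.5  # assumed end-to-end latency threshold
open_chains: dict[str, list[dict]] = {}  # transaction id -> events seen so far

def persist_to_data_lake(chain: list[dict]) -> None:
    """Stand-in for shipping the structured chain to the firm's big data store."""
    print(f"persisted chain of {len(chain)} events")

def on_event(event: dict) -> None:
    """Correlate the event, assess it in real-time, persist once the chain completes."""
    chain = open_chains.setdefault(event["txn_id"], [])
    chain.append(event)

    # Real-time assessment: compare elapsed time against the threshold now,
    # rather than discovering the breach in a later batch query.
    elapsed = event["ts"] - chain[0]["ts"]
    if elapsed > LATENCY_THRESHOLD_S:
        print(f"ALERT: txn {event['txn_id']} slow at stage {event['stage']} ({elapsed:.2f}s)")

    if event["stage"] == "fill":  # the final stage closes the chain
        persist_to_data_lake(open_chains.pop(event["txn_id"]))

for e in [
    {"txn_id": "T1", "stage": "tick", "ts": 0.00},
    {"txn_id": "T1", "stage": "order", "ts": 0.10},
    {"txn_id": "T1", "stage": "fill", "ts": 0.72},  # breaches the 0.5s threshold
]:
    on_event(e)
```

The key design point is the division of labour: the in-memory tracker handles the 'here and now' assessment, while the big data store receives already-structured chains, ready for the post-event analysis it does best.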

Moreover, because such tracking systems are event-driven, they can automatically change how they track data in tune with evolving business rules. This flexibility eliminates the maintenance overhead that query-driven approaches incur.

In this way, firms are now successfully benefiting from the advantages of real-time transaction tracking, without the limitations that just using big data technologies can present.

By Steve Colwill, CEO, Velocimetrics.
