Reference data has been on a journey for many years. A decade ago, in the reconciliation-based world of post-trade, tolerated levels of inaccuracy ran to high single-digit percentages and, in some cases, beyond. The financial crisis highlighted just how risk-sensitive reference data had become in trading operations, both internally and between counterparties, and so, a decade on, a zero-tolerance approach to data accuracy is not only the expectation but the norm in any data business serving the financial markets. However, no firm can ever warrant that its data is flawless: the market itself is not without its flaws, and differences of opinion in technical and regulatory interpretation conspire to sustain an imperfect environment. What is different is that modern technology is both increasing automated processing rates and exposing data issues more quickly, leading to more timely resolutions: machine intelligence first, human interaction last. The intelligent augmentation of workflow, by providing the right tools, is enabling the industry's demand for zero tolerance. Technology conversations that do not include artificial intelligence are rare these days, and with the increasing adoption of AI there is a timely convergence with improved data quality standards, where algorithms can be primed with relevant and robust data for superior and more relevant outcomes.
Internal processing targets continue to climb for those incentivised on straight-through processing (STP) performance. With so many variables that can go wrong, there are only so many checks and balances the human hand can implement. The experienced practitioner might assume that data quality and sanity checks have been fully prescribed and implemented in classic trade processes, most of which are now hardened in use. The reality is that AI is surfacing new data relationships that humans cannot compute or compete with, having reached the limits of their real-world knowledge. Is it time to throw in the towel, accept that our human capacity has physical limits, and embrace the positive augmentation that AI can bring to our day-to-day work? This is an epiphany moment akin to the mid-70s, when the first electronic calculators dislodged slide rules and logarithm tables from the classroom. How many fat-fingered and sight-challenged students were ecstatic to ditch the old-school tools of their forefathers for the shiny new gadget that cut through the painstaking manual process to reach the answer instantly? That was a compelling use case, and the use of AI today is bringing about a more serious, wholesale change in the role humans play in problem solving.
There is already evidence of natural evolution in the framework of trade processing, with real-time clearing and settlement for trades whose essential core data is present immediately post-trade and can pass through unobstructed. Presenting this core data for each individual trade in real time will rely less on bulk database lookup and more on API calls that service the trade's unique data requirements, applying the most up-to-date data, including clearing and downstream vendor platform symbologies, regulatory reporting attributes and exchange fees, to produce a comprehensive trade bundle. This efficient application of data to live trade processing and the management of open positions only will challenge existing data management and procurement structures, and demand the best from new data publishing technologies and modern infrastructure services.
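The per-trade enrichment described above can be sketched as follows. This is a minimal illustration only: the lookup functions are hypothetical stand-ins for golden-source API calls, and the field names are invented for the example, not taken from any real vendor or clearing platform.

```python
# Sketch of per-trade enrichment via API-style lookups rather than bulk
# database joins. Each lookup below is a hypothetical stand-in for a call
# to a golden-source or vendor endpoint.

def lookup_symbology(instrument_id: str) -> dict:
    # Hypothetical: map an internal ID to downstream platform symbols.
    return {"clearing_symbol": f"CLR-{instrument_id}",
            "vendor_symbol": f"VND-{instrument_id}"}

def lookup_regulatory_attributes(instrument_id: str) -> dict:
    # Hypothetical: fetch the trade's regulatory reporting attributes.
    return {"reportable": True}

def lookup_exchange_fees(venue: str) -> dict:
    # Hypothetical: fetch the current fee schedule for the venue.
    return {"fee_bps": 0.25}

def build_trade_bundle(trade: dict) -> dict:
    """Assemble a comprehensive trade bundle for one trade, in real time."""
    bundle = dict(trade)
    bundle.update(lookup_symbology(trade["instrument_id"]))
    bundle.update(lookup_regulatory_attributes(trade["instrument_id"]))
    bundle.update(lookup_exchange_fees(trade["venue"]))
    return bundle

trade = {"trade_id": "T1", "instrument_id": "XYZ", "venue": "XLON"}
bundle = build_trade_bundle(trade)
```

The design point is that each trade pulls exactly the data it needs at the moment it needs it, rather than being joined against a bulk reference database overnight.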
Imagine, if you will, a TradeBot attached to a unique trade ticket. This trade-aware bot knows what data the order under its curation needs in order to clear and settle, and will call APIs to fetch and validate that data, allowing clean straight-through processing and heralding the rise of intelligent trade tickets. The TradeBot's curation could even begin at quote vending, providing intelligent pre-trade data validation, predictive risk mitigation for an intended trade, and the correct allocation of the appropriate type of TradeBot.
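A minimal sketch of such a TradeBot might look like the following. The required field lists and the injected fetcher callables are illustrative assumptions, not a real clearing interface; the point is simply that the bot knows its trade type's data requirements and calls out for whatever is missing.

```python
# Hypothetical TradeBot: one bot per ticket, aware of the fields its
# trade type needs to clear and settle. Field names are invented for
# illustration; fetchers stand in for golden-source API calls.

REQUIRED_FIELDS = {
    "equity": ["isin", "settlement_date", "account"],
    "future": ["exchange_code", "expiry", "account"],
}

class TradeBot:
    def __init__(self, trade_type: str, fetchers: dict):
        self.required = REQUIRED_FIELDS[trade_type]
        self.fetchers = fetchers  # field name -> callable(ticket) -> value

    def curate(self, ticket: dict) -> dict:
        """Fill missing required fields via API calls; report STP readiness."""
        for field in self.required:
            if not ticket.get(field) and field in self.fetchers:
                ticket[field] = self.fetchers[field](ticket)
        missing = [f for f in self.required if not ticket.get(f)]
        ticket["stp_ready"] = not missing
        ticket["missing"] = missing
        return ticket

# Usage: the bot fetches the missing ISIN and declares the ticket STP-ready.
bot = TradeBot("equity", {"isin": lambda t: "GB0000000001"})
ticket = bot.curate({"settlement_date": "2024-07-01", "account": "A1"})
```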
However, perhaps we should rein in our near-term expectations and acknowledge the efficiency of existing trade processes. A recent anecdotal sound bite suggested that 80 percent of the world's transactions are processed on mainframes, yet incur only six percent of operational costs. It would be foolhardy not to acknowledge the benefits of these invested systems, so we should focus on the intelligent augmentation of existing processes, whether or not they have been hardened in use. With the vast majority of trades already achieving clean STP rates upwards of 90 percent, the obvious application of bots is to automatically resolve out-trades caused by missing or incorrect data, by making calls to golden-source APIs; this is beginning to nudge up trade processing success rates. And where AI/ML cannot resolve a trade break, there is always a human who can step in, themselves leveraging UI-based AI/ML tools to pack a real resolution punch.
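The machine-first, human-last flow described above can be sketched in a few lines. Everything here is hypothetical: the golden-source mapping and the field names are invented for illustration, and a real deployment would call live APIs and route unresolved breaks to an operations workflow queue.

```python
# Sketch of bot-first trade-break resolution: try golden-source lookups
# to repair missing fields, and only escalate to a human queue when the
# machine cannot resolve the break. All names are illustrative.

def resolve_break(trade: dict, broken_fields: list,
                  golden_source: dict, human_queue: list):
    """Attempt machine resolution of a trade break; escalate the rest."""
    for field in broken_fields:
        value = golden_source.get((trade["instrument_id"], field))
        if value is not None:
            trade[field] = value  # machine resolution succeeded
    unresolved = [f for f in broken_fields if f not in trade]
    if unresolved:
        # Human last: hand only the unresolvable residue to a person.
        human_queue.append((trade["trade_id"], unresolved))
    return trade, unresolved

# Usage: the ISIN is repaired automatically; the fee break goes to a human.
golden = {("XYZ", "isin"): "US0000000000"}
queue = []
trade, left = resolve_break(
    {"trade_id": "T9", "instrument_id": "XYZ"},
    ["isin", "fee"], golden, queue)
```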
Underpinning these intelligent toolboxes are messaging and collaboration solutions, within and between firms, that accelerate issue resolution in trade processing by connecting the right personnel and data sets on demand: a clear example of augmented deployment. Instant access to validate or find missing data can also be serviced from within workflow tools, from web apps to Excel add-ins and, of course, bots, all offering well-crafted UIs that connect machines to humans, in a way that could be interpreted as a nod from the machines just to keep us feeling wanted, in the loop and in control. Right. Maybe the writing is on the wall that, in pursuit of zero tolerance in trade processing, the unintended outcome will be to render even these UIs unneeded.