“There’s a lot of hand waving with machine learning in Trade Surveillance”


September 4, 2017 | bobsguide

bobsguide spoke to Dermot Harriss, Senior Vice President of Regulatory Solutions at OneMarketData, about how machine learning and new regulations are changing the landscape of trade surveillance, and how a balanced mix of data management via machine learning can reduce false positives and minimise costs for firms.

How did you get into trade surveillance?

I was with Goldman Sachs for 15 years, joining the statistical arbitrage group. There, my colleagues Leonid Frants and Oleg Yatvitskiy built the infrastructure for collecting tick data and analytics, which remains the platform Goldman Sachs uses to this day for higher-frequency data collection and analysis. They left and founded OneMarketData, whilst I stayed on at Goldman Sachs in various roles related to market-making as well as working as a strategist and trader.

I then rejoined the old team at OneMarketData in 2015. At the time, the group was thinking of moving into the solution space, creating pre-packaged solutions for customers, including trade practice surveillance.

When you started out, the trade surveillance landscape was quite different from what it is now, particularly with new regulatory changes. What sort of change have you seen to trade surveillance and what would you say are the key contributing factors?

I would say the primary factor has been the much greater coordination between regulatory bodies internationally. You used to ‘do your own thing’ when it was a fairly quiet regulatory landscape; now there’s a lot of communication. There’s a lot of convergence in the regulations, especially across US, Canadian and European regulation. For instance, definitions of various manipulative practices have converged pretty strongly – they all look over each other’s shoulders.

The second key factor has been MiFID II and MAR. That regulatory package has really driven activity in Europe. I would say most participants in Europe who did not have a US parent company had almost no monitoring or analysis capabilities related to trade practice surveillance. Now pretty much all of them are tooled up to some degree, though still less than you would expect considering MAR has been in place for a year now.

There’s been more activity coming into MiFID II than there was around MAR. Quite a few market participants in Europe saw surveillance as more a part of the whole MiFID II package than of MAR. Many of them believed they would only be tweaking their in-house systems, playing a wait-and-see game after MAR, with more of an eye to making substantial changes for MiFID II. That’s reflected in the pickup we’ve seen: not just in Best Ex under MiFID II, which is what we would have expected, but actually more activity in the trade surveillance space coming into the MiFID II deadline than we saw before MAR.

In that case, do you see surveillance technology as a quick fix or more as a longer term solution? And does that differ in requirement between the sell side and buy side?

The regulation is pretty uniform across the buy and sell sides, even though some buy-side firms haven’t quite realised they have a lot of obligations. The difference lies in the approach. The sell side is more likely to build on premise and opt for a more sophisticated implementation. The buy side is looking to tick the box, wondering how it possibly applies to them and not the brokers. The buy side quite often deals in illiquid instruments that trade infrequently, so there’s a degree of head-scratching about how the MAR requirements will apply to them.

On the sell side, they know what they need to do and they’re doing it, some a little less aggressively than others. MAR is broadly clear, although a few of the manipulative practices are not well defined; for instance, nobody really knows what ramping is. But there’ll be continuing refinement in what the requirements really are after MiFID II comes in.

There’s been a bit of a shake-up in the vendor market too, and that’s contributing to our uptake in business. Over the last few years quite a few vendors have popped up believing the surveillance space was hot, and some have been struggling in Europe. Ancoa, for instance, went bankrupt, and their customers, who signed up around MAR, are now looking to switch. The US has seen some vendors – specialists in applying machine learning to surveillance, for example – come and go.

We’ve seen the quick fix clients around MAR and now we see the more nuanced clients looking for a more permanent and sustainable solution coming in to MiFID II.

Would you say then, that your product is sustainable?

We believe we have a good, sustainable solution. That’s mainly down to OneMarketData’s 13 years’ experience as a data management platform company handling high-volume data. We built the surveillance on top of a pretty solid, tried and tested, widely deployed infrastructure. With surveillance, we haven’t run into any problems with the underlying data management; it’s more about getting the nuance of the alert types right.

Some of our competitors, especially the new ones who entered with startup capital and ideas for machine learning applications, didn’t have a solid data management platform underneath, so they struggled to deliver what they promised. We have that data management experience and insightful customer base which makes us a sustainable player.

Would you say that the underlying data management experience is your USP? What would you consider are your key functions?

The key function is the data management side, so we can handle the volume; OneTick can handle all options quoting and trading activity in the US, for instance.

The second is our false positive rate. The true cost of surveillance is in the false positive rate, because a high false positive rate means a high operational cost. We focus on minimising false positives by doing simple things that even some of our older, more entrenched competitors don’t do. For instance, we give our customers the ability to modify, test and change all of the parameters and alert types. They don’t have to wait for us; they can tune the parameters to their own requirements, and that lowers the false positives.
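The idea of customer-tunable alert parameters can be sketched as follows. This is purely illustrative — the parameter names and the spoofing-style cancel-ratio check are hypothetical examples, not OneTick’s actual alert logic or API:

```python
from dataclasses import dataclass

@dataclass
class AlertParams:
    # Thresholds a compliance team can tune without waiting for the vendor
    min_cancelled_orders: int = 10   # cancellations inside the window
    min_cancel_ratio: float = 0.9    # cancelled / placed
    window_seconds: int = 60

def spoofing_alert(placed: int, cancelled: int, params: AlertParams) -> bool:
    """Flag a trader whose cancel behaviour in one window exceeds both thresholds."""
    if placed == 0:
        return False
    ratio = cancelled / placed
    return cancelled >= params.min_cancelled_orders and ratio >= params.min_cancel_ratio

# Tightening a threshold suppresses borderline cases, i.e. fewer false positives
default = AlertParams()
strict = AlertParams(min_cancel_ratio=0.98)
print(spoofing_alert(placed=20, cancelled=19, params=default))  # True: 95% cancel ratio
print(spoofing_alert(placed=20, cancelled=19, params=strict))   # False: below 98%
```

The point is simply that the thresholds live in data the customer controls, so tuning them to a desk’s normal behaviour is a configuration change rather than a vendor request.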

Our team also has strong market structure experience so we understand how the regulator looks at compliance, surveillance and manipulative practices. We’re able to come up with a way of breaking down alerts into fundamental manipulative behaviours. We put that insight into the way we structure alerts and design them.

Consequently, we’ve ended up with alerts that are able to distinguish between noise and real, bad practice by observing patterns. I’m not saying we’re the only surveillance firm with that depth of experience, but there are definitely a few smaller players who start from a purely technological perspective and don’t understand the principles.

So a purely machine learning approach does not necessarily work. In that case, where can machine learning aid trade surveillance and what does it have in store for the future?

There’s a lot of hand waving about it in trade surveillance. It’s a very interesting field and we do have a machine learning team at OneMarketData, but the most important aspects of the trade surveillance system don’t really need machine learning.

And that’s because detecting the patterns of manipulative practices that regulators are looking at and building evidence on is a reasonably simple alerting problem. Those alerts don’t really require advanced statistical techniques.

There are areas where advanced statistical techniques like machine learning can help. One area we’re focused on is trader profiling that recognises trading deviations. This type of ‘unsupervised learning’ analyses the data, comes up with the metrics of what a normal profile resembles and, from there, figures out what abnormal is.
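A minimal version of that profiling idea looks like this — a deliberately simplified sketch using a z-score cutoff on one made-up metric (daily volume), not OneMarketData’s actual profiling model, which would cover many metrics and richer statistics:

```python
import statistics

def profile_baseline(daily_volumes: list) -> tuple:
    """Learn what 'normal' looks like for one trader from historical activity."""
    return statistics.mean(daily_volumes), statistics.pstdev(daily_volumes)

def is_abnormal(todays_volume: float, mean: float, stdev: float, z_cutoff: float = 3.0) -> bool:
    """Flag a day that deviates more than z_cutoff standard deviations from baseline."""
    if stdev == 0:
        return todays_volume != mean
    return abs(todays_volume - mean) / stdev > z_cutoff

history = [100, 110, 95, 105, 102, 98, 107, 101]   # a trader's usual daily volume
mean, stdev = profile_baseline(history)
print(is_abnormal(104, mean, stdev))   # False: within the trader's normal range
print(is_abnormal(400, mean, stdev))   # True: large deviation, flagged for review
```

No labelled examples of manipulation are needed: the baseline is learned from the trader’s own history, which is what makes the approach ‘unsupervised’.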

Machine learning can also help by learning from all the decisions made by the human user and gradually adjusting alert parameters to minimise false positives. We’re beginning to look at that approach, but it needs a large dataset, and we don’t always have the data because sometimes our customers keep it. So you can’t have effective machine learning without data management too.
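The simplest form of that feedback loop can be sketched as below. This is an illustrative toy, not OneMarketData’s method: it takes analyst dispositions of past alerts and raises the score threshold as high as possible without muting any alert an analyst confirmed:

```python
def tune_threshold(reviewed_alerts: list) -> float:
    """Pick the highest alert-score threshold that still keeps every alert
    a human analyst confirmed as a true positive.

    reviewed_alerts: (score, analyst_confirmed) pairs from past reviews.
    """
    confirmed = [score for score, ok in reviewed_alerts if ok]
    if not confirmed:
        return float("inf")   # nothing ever confirmed: mute everything
    return min(confirmed)     # lowest score an analyst ever confirmed

# Past alerts: the low-scoring ones were mostly dismissed as noise
history = [(0.2, False), (0.3, False), (0.55, True), (0.4, False), (0.7, True)]
threshold = tune_threshold(history)
print(threshold)              # 0.55: future alerts scoring below this are muted
```

Even this toy shows why the dataset matters: the threshold is only as trustworthy as the volume of reviewed alerts behind it, which is the data-management dependency described above.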

As for the future, given the amount of oversight and coordination between regulators, the requirement for all market participants to record market activities, and the gradual extension of surveillance to OTC products, trade surveillance is going to be a busy area for some time to come.


