There’s no escaping the great swathes of regulatory change impacting Europe’s trading community. Whilst compliance requires eye-watering budgets, some firms are discovering that by going slightly beyond what the regulations stipulate in certain areas they can derive significant additional business benefit from what are, on the face of it, largely obligatory investments. In the case of preparing for MiFID II’s algorithmic testing rules, that benefit can translate into more accurate, and therefore more profitable, trading.
The algos in play today span from the relatively simple to the spectacularly complex. Whilst few would disagree with the benefits algorithmic trading has introduced in terms of cutting transaction costs, facilitating greater efficiency and tremendous scalability, regulators are understandably keen to manage the possible risks presented.
Focusing on how algorithms are developed, deployed and subsequently updated, the overall goal of the MiFID II testing requirements is to prevent an algo negatively impacting the fair and orderly functioning of the market. The regulators want to ensure that trading systems and algorithms do ‘not contribute to disorderly trading conditions and can continue to work effectively in stressed market conditions…’ (RTS 6, Article 5, 4d). As such, the requirements make investment firms responsible for ensuring that the algorithms they operate, along with the systems that support them, are fully tested in a non-live environment and properly maintained. So ultimately a firm can be confident that its trading systems and algorithms can successfully consume market data, react to the market and perform well in both ordinary and extraordinary situations.
However, the risk of an algorithmic trading system reacting in an unexpected manner doesn’t just have regulatory implications; it can also have very serious financial ramifications for the firm itself. Improving the effectiveness of the tests that algorithms and all related components are put through can therefore considerably reduce the possibility of unintended trading losses. So how can firms increase the robustness of their testing practices and minimise the potential for costly consequences?
Implementing testing standards across all components
Whilst components purchased from third parties have undoubtedly been tested as part of their development process, they are unlikely to have been evaluated for every single situation the purchasing firm would want to test for. Take for instance a feed handler deployed alongside algorithmic trading systems and smart order routers. Each component may have been independently tested against a specific set of scenarios, but once deployed together they are fully interdependent, potentially receiving multiple market data feeds and supporting different types of trading practices; how each component reacts to changing conditions will have knock-on effects on the others.
I realise it’s not realistic to envisage, and then test for, every single possible set of circumstances. However, if a firm can access a library of test scenarios that it continuously updates as increasingly challenging situations are experienced or become conceivable, and ensures all components are rigorously tested against every scenario in that library, the chance of being caught out by an unusual combination of market events can be dramatically reduced.
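A scenario library of this kind can be sketched as a catalogue of named event sequences that every component is driven through, with any failures recorded per scenario. The sketch below is purely illustrative; the `MarketEvent` fields, the `ScenarioLibrary` class and the component names are all assumptions, not part of any real vendor API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical market event type -- fields are illustrative only.
@dataclass
class MarketEvent:
    symbol: str
    price: float
    size: int

@dataclass
class ScenarioLibrary:
    """A continuously updated catalogue of named test scenarios."""
    scenarios: Dict[str, List[MarketEvent]] = field(default_factory=dict)

    def register(self, name: str, events: List[MarketEvent]) -> None:
        """Add or replace a scenario as new challenging situations emerge."""
        self.scenarios[name] = events

    def run_all(self, components: Dict[str, Callable[[MarketEvent], None]]) -> Dict[str, List[str]]:
        """Drive every component through every scenario; collect failures."""
        failures: Dict[str, List[str]] = {}
        for scenario_name, events in self.scenarios.items():
            for comp_name, handler in components.items():
                try:
                    for event in events:
                        handler(event)
                except Exception as exc:
                    failures.setdefault(scenario_name, []).append(f"{comp_name}: {exc}")
        return failures
```

New scenarios can then be registered as they are encountered in production, and the whole component set re-run against the full library on every release.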
Testing using real market data
Market data drives most automated trading decisions. Therefore, the more realistic the data employed in pre-production environments, the more likely the firm is to be able to effectively test how their algorithms and supporting systems will react when faced with the possible variances in volume, content and rate that real-world market data presents.
The market data consumed in a trading environment can include very unusual combinations of messages, new trading symbols, variances in the sequence of events and a degree of burstiness that can really put an algorithm, and its surrounding technical environment, through its paces. If investment firms are able to run tests employing actual market data, rather than just synthetic or test data, they can therefore gain a much more representative understanding of how stable their algorithms will prove to be when deployed into production and exposed to real, shifting market conditions. Given the time-critical nature of some algorithms, and the load characteristics that only occur in fast-moving volatile conditions, it is extremely important that the data is real and that the timing of the data events driving the test is as accurate as technically possible. Replay systems are now available that can reproduce market events to an accuracy of +/-20 nanoseconds.
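The core idea of timing-faithful replay can be illustrated with a minimal sketch: step through captured (timestamp, payload) pairs and deliver each one after the recorded inter-event gap has elapsed. This is an assumption-laden software illustration only; a sleep-based loop cannot approach the +/-20 nanosecond accuracy mentioned above, which requires dedicated hardware replay.

```python
import time
from typing import Callable, Iterable, Tuple

def replay(events: Iterable[Tuple[float, bytes]],
           deliver: Callable[[bytes], None],
           speed: float = 1.0) -> None:
    """Replay captured (timestamp_seconds, payload) pairs, preserving
    the recorded inter-event gaps (optionally scaled by `speed`).
    Software sketch only: OS sleep granularity limits accuracy to
    milliseconds, far from hardware-replay precision."""
    start_wall = time.perf_counter()
    first_ts = None
    for ts, payload in events:
        if first_ts is None:
            first_ts = ts
        # Wall-clock offset at which this event should fire.
        target = (ts - first_ts) / speed
        delay = target - (time.perf_counter() - start_wall)
        if delay > 0:
            time.sleep(delay)
        deliver(payload)
```

Anchoring every event to the replay start time, rather than sleeping for each gap individually, stops timing error accumulating across a long capture.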
Precisely recreating a situation to reproduce a problem in HD
When trading issues emerge all eyes quickly turn to operational teams to identify the root cause and effectively solve the problem. Being able to accurately reconstruct the situation is key to doing so quickly and reducing incurred costs. However, even though some firms operate a replica of their full production environment in their testing facility they can still struggle to access a copy of the exact market data feeds received when the incident happened. Without this data, firms can struggle to quickly understand why certain components reacted in a particular way or robustly test whether the fix they’ve implemented will actually prevent future costly mistakes should a similar set of market events reoccur.
Employing artificial test data can prove largely ineffective, as it’s highly unlikely to accurately represent the actual data that was being received when the incident happened. Likewise, it’s not just a case of rerunning the data from the venue that was being traded on, what’s needed is all of the data that was being consumed from every venue. New services are now making it possible for firms to continuously capture a copy of all data being received and then precisely replay any given period on-demand, enabling firms to substantially increase testing efficiencies and overall accuracy.
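The capture-everything-replay-any-window idea can be sketched as a store keyed by venue, from which any time window can be extracted with all venues merged back into arrival order. This is an illustrative in-memory sketch under my own naming; a production capture service would persist to disk and timestamp in hardware.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

class FeedCapture:
    """Records every inbound message from every venue and replays any
    time window across all venues, merged in arrival order.
    Hypothetical class -- not a real vendor API."""

    def __init__(self) -> None:
        # venue -> list of (timestamp_seconds, raw payload)
        self._store: Dict[str, List[Tuple[float, bytes]]] = defaultdict(list)

    def record(self, venue: str, ts: float, payload: bytes) -> None:
        """Continuously capture a copy of each received message."""
        self._store[venue].append((ts, payload))

    def window(self, start: float, end: float) -> List[Tuple[float, str, bytes]]:
        """Return every message from every venue within [start, end],
        sorted by timestamp, ready to drive a replay."""
        merged = [
            (ts, venue, payload)
            for venue, msgs in self._store.items()
            for ts, payload in msgs
            if start <= ts <= end
        ]
        merged.sort(key=lambda m: m[0])
        return merged
```

The key point the sketch captures is that an incident window must be replayed with the data from every venue, not just the one being traded on, since components may have been reacting to any of the feeds.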
Testing for what matters to your firm
Whilst MiFID II requires trading venues to provide testing facilities, investment firms can benefit significantly from being able to independently shape the tests they’d like to run, creating tailored testing scenarios based on their business needs instead of being limited to those offered by a trading venue. These tests could be customised to assess, for instance, how responsive an algorithm proves to be when market data rates soar compared with how it copes under normal circumstances, and enhancements can then be introduced accordingly.
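The rate-comparison test described above could be sketched as a small harness that drives an event handler at a chosen inter-arrival time and reports the mean per-event handling latency, so that normal and burst conditions can be compared side by side. The function name and the latency metric are my own assumptions, not a prescribed methodology.

```python
import time
from typing import Callable, List

def mean_latency(handler: Callable[[int], None],
                 n_events: int,
                 inter_arrival_s: float) -> float:
    """Drive `handler` with `n_events` sequential events, pausing
    `inter_arrival_s` between them (0.0 simulates a full-rate burst),
    and return the mean per-event handling time in seconds."""
    latencies: List[float] = []
    for i in range(n_events):
        t0 = time.perf_counter()
        handler(i)
        latencies.append(time.perf_counter() - t0)
        if inter_arrival_s > 0:
            time.sleep(inter_arrival_s)
    return sum(latencies) / n_events

# Hypothetical usage -- compare normal pacing against a burst:
#   normal = mean_latency(algo.on_event, 1_000, 0.001)
#   burst  = mean_latency(algo.on_event, 1_000, 0.0)
```

A large gap between the two figures would suggest the algorithm queues or degrades under bursty load and warrants tuning before those conditions occur in production.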
Facilitating precisely repeatable tests
Being able to reconstruct precisely repeatable testing conditions enables firms to understand exactly how particular components are likely to respond under certain circumstances. Firms that can run these tests in-house, as many times as they like, avoid the constraints that external factors, such as a venue’s limited out-of-hours testing window, can introduce. By recording and replaying good days and bad days, algorithms and infrastructure can be fine-tuned to turn a loss-making day into a profit-making one in future. As such, by making the right investment, firms can not only meet their regulatory obligations but also drive their business forward.
Therefore, whilst the regulators focus on laying out the minimum parameters a firm should follow to avoid algorithms negatively impacting market orderliness, by taking steps to increase testing effectiveness beyond the mandated requirements, firms can additionally benefit from the ability to conduct more precise, robust and repeatable tests tailored to their specific needs, dramatically reducing the potential for an algorithm or its supporting systems to respond in an unexpected manner to future market events.
By Steve Rodgers, Head of Engineering, Velocimetrics.