MiFID II: Controlling and testing algorithms

By Steve Colwill | 21 May 2015

Whilst regulators appear to recognise the value algorithmic trading presents, significant concerns have also been raised regarding the safety of algorithmic practices and the market risks a poorly coded or deployed algorithm could present. This has led to many pages being penned on the subject in recent regulatory texts.

Politicians and regulators quite rightly want to create a safer and more responsible financial system. Building on existing regulatory guidance, MiFID II will enhance the systems, processes and controls required of investment firms and trading venues engaging in algorithmic trading.

Most measures are just good common sense: requiring, for instance, that investment firms conducting algorithmic trading have ‘effective systems and risk controls’ in place; that their trading systems are resilient, have sufficient capacity, are ‘subject to appropriate trading thresholds and limits’ and prevent the sending of erroneous orders; and that those systems are ‘fully tested and properly monitored’. In turn, trading venues are asked to provide environments where algorithms can be tested, to ensure algorithmic trading systems don’t create or contribute to disorderly trading on the market, and to manage any such conditions that do arise.

Whilst we await the further detail anticipated in the Level 2 guidance, it is reasonable to expect that the players are already doing most of what the rules will require - at least to a significant degree. However, it’s important to remember that the regulations represent only the minimum standards expected to prevent algorithmic trading from impacting market orderliness.

No one wants to be at the centre of a crisis similar to the one Knight Capital, a firm highly experienced in algorithmic trading, found itself in back in August 2012 - especially with the Financial Conduct Authority’s (FCA’s) fines now appearing to increase in both frequency and magnitude, and a growing focus on individual responsibility.

Luckily, such scenarios don’t come around all that often. However, large IPOs, double and triple witching days and other more unusual events do, and these can really test how an algorithm responds to an unfamiliar set of circumstances.

The inability to accurately predict and control how an algorithm behaves when faced with such scenarios can have serious financial ramifications. So what can be done to ensure the unexpected can be better managed when it comes to algorithmic trading?

Testing algorithms – where it all begins

Firms need to successfully test their algorithms in non-live environments that accurately reflect real-world conditions. The issue is that some firms struggle to meaningfully emulate market subtleties in pre-production, due to the limitations of synthetic data or of exchange data from a non-live feed available only during weekend or overnight testing windows. As a result, firms can find themselves deploying algorithms whose testing is disproportionately light relative to the risk they present.

Critics of the MiFID II mandate requiring trading venues to provide non-live algorithmic testing facilities have argued that the proposal could prove expensive and unnecessary, achieving little more than investment firms can already accomplish through their own back-testing. This may or may not be the right approach, but either way further steps could be taken by some firms to ensure more rigorous assessment.

Accessing authentic data to test against is fundamental: data that allows volume volatility, rate inconsistencies and differences in event sequences to be effectively evaluated. That is the difference between real-world testing and testing in a vacuum.

It’s also about much more than just testing on a single feed or market basis, or testing an algorithm in isolation on a venue. Firms would significantly benefit from being able to create realistic conditions to test how, for instance, their smart order router will react to multiple market feeds in different circumstances, and how the systems it interacts with respond in turn. In doing so, the potential wider impact of different developments could be better understood and more accurate control mechanisms put in place.

There is the practical argument that you can’t test for every possible scenario, as the one that throws your algorithm off will be the completely unforeseen tsunami. However, it’s fair to say that by employing real-world data and throwing multiple market states at your systems back-to-back, you will cover considerably more scenario variations than even the most imaginative developer could devise with a synthetic alternative.
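As a rough illustration of what replaying captured data back-to-back might look like in practice, here is a minimal sketch in Python. The MarketEvent fields, the scenario data and the algo_under_test callback are hypothetical stand-ins chosen for the example, not a description of any particular feed format or testing product.

```python
# Illustrative sketch only: a minimal harness that replays captured market-data
# scenarios back-to-back against a system under test. All names and data below
# are hypothetical stand-ins, not a reference to any vendor API or feed format.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class MarketEvent:
    timestamp: float      # seconds since the start of the captured session
    venue: str            # e.g. "XLON"
    instrument: str       # e.g. "VOD.L"
    price: float
    size: int


def replay(events: Iterable[MarketEvent],
           on_event: Callable[[MarketEvent], None]) -> None:
    """Push captured events, in their original order, into the system under test."""
    for event in sorted(events, key=lambda e: e.timestamp):
        on_event(event)


def run_back_to_back(scenarios: List[List[MarketEvent]],
                     on_event: Callable[[MarketEvent], None]) -> None:
    """Chain several captured scenarios (e.g. a large IPO open, a triple-witching
    expiry, a sudden volume burst) so the algorithm faces abrupt regime changes,
    not just one market state in isolation."""
    for scenario in scenarios:
        replay(scenario, on_event)


if __name__ == "__main__":
    # Two tiny hand-made "scenarios" stand in for real captured data.
    calm = [MarketEvent(0.0, "XLON", "VOD.L", 72.50, 1_000),
            MarketEvent(0.5, "XLON", "VOD.L", 72.51, 500)]
    burst = [MarketEvent(0.0, "XLON", "VOD.L", 71.90, 250_000),
             MarketEvent(0.01, "XLON", "VOD.L", 70.10, 400_000)]

    def algo_under_test(event: MarketEvent) -> None:
        # In practice this would be the firm's own strategy or order router.
        print(f"{event.timestamp:6.2f}s {event.venue} {event.instrument} "
              f"{event.price} x {event.size}")

    run_back_to_back([calm, burst], algo_under_test)
```

The value lies in chaining captured scenarios so the system under test experiences abrupt shifts in market state in sequence, rather than being assessed against a single, well-behaved session.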

Operational oversight

Once an algorithm is in play, firms need to achieve the degree of operational oversight necessary to ensure they have a detailed understanding of their trading activities as they are happening; that their algorithms are responding to market events as expected and that they in turn are not trading in a way that could be deemed to impact the fairness and orderliness of the market.

This involves being able to immediately detect errors or delays in the market data inputs being used to formulate algorithmic trading decisions. That really means being able to distinguish the subtleties that could generate significant margins of error. It’s much more than just being able to detect whether a feed is live or not; it’s whether a particular market, matching engine or instrument that should be included in the feed has been lost whilst all other elements appear to be ticking as normal. Real-time detection is essential; without it, multi-million dollar trading losses can rack up in a matter of minutes.
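To illustrate the kind of granularity involved, the sketch below tracks the last update seen for each instrument on each feed and flags anything that has gone quiet while the rest of the feed keeps ticking. The two-second staleness threshold and the FeedMonitor interface are assumptions made for the example, not a description of any specific monitoring tool.

```python
# Illustrative sketch only: spotting a single instrument that has stopped
# ticking on an otherwise live feed. Thresholds and interface are assumptions.
import time


class FeedMonitor:
    """Tracks the last update time of every (feed, instrument) pair it is told to watch."""

    def __init__(self, stale_after_seconds: float = 2.0):
        self.stale_after = stale_after_seconds
        self.last_update = {}  # (feed, instrument) -> monotonic timestamp, or None

    def watch(self, feed: str, instrument: str) -> None:
        # Register an instrument that should be ticking on this feed.
        self.last_update.setdefault((feed, instrument), None)

    def on_tick(self, feed: str, instrument: str) -> None:
        # Record the arrival time of every update we see.
        self.last_update[(feed, instrument)] = time.monotonic()

    def stale_instruments(self) -> list:
        """Instruments that have gone quiet while the rest of their feed keeps ticking."""
        now = time.monotonic()

        def is_fresh(ts):
            return ts is not None and now - ts < self.stale_after

        live_feeds = {feed for (feed, _), ts in self.last_update.items() if is_fresh(ts)}
        return [(feed, instrument)
                for (feed, instrument), ts in self.last_update.items()
                if feed in live_feeds and not is_fresh(ts)]


if __name__ == "__main__":
    monitor = FeedMonitor(stale_after_seconds=2.0)
    for symbol in ("VOD.L", "BARC.L", "HSBA.L"):
        monitor.watch("XLON", symbol)
    monitor.on_tick("XLON", "VOD.L")
    monitor.on_tick("XLON", "BARC.L")
    # HSBA.L never ticks: a feed-level heartbeat would still report XLON as
    # healthy, but the per-instrument view surfaces the gap.
    time.sleep(2.1)
    monitor.on_tick("XLON", "VOD.L")
    monitor.on_tick("XLON", "BARC.L")
    print(monitor.stale_instruments())  # [('XLON', 'HSBA.L')]
```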

It also involves having a detailed understanding of how all of the different systems and processes involved are operating. It’s about being able to instantly identify if something is amiss or a trading threshold is close to being breached, whether by the firm itself or by its direct electronic access clients. Once detected, rapidly pinpointing an issue’s source is essential for effective management and control.
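A simple way to picture threshold monitoring is a running total per trading entity, checked against a limit on every order, with a warning raised well before the limit itself is hit. The limits, the 80% warning level and the entity names below are arbitrary assumptions for the purpose of the sketch.

```python
# Illustrative sketch only: warning as a trading threshold is approached, per
# trading entity (the firm's own flow or a direct electronic access client).
from collections import defaultdict


class ThresholdMonitor:
    def __init__(self, gross_notional_limits: dict, warn_fraction: float = 0.8):
        self.limits = gross_notional_limits          # entity -> limit in currency units
        self.warn_fraction = warn_fraction
        self.gross_notional = defaultdict(float)     # entity -> running total

    def on_order(self, entity: str, notional: float) -> str:
        """Return 'ok', 'warn' (approaching the limit) or 'breach'."""
        self.gross_notional[entity] += abs(notional)
        limit = self.limits.get(entity)
        if limit is None:
            return "ok"
        used = self.gross_notional[entity]
        if used >= limit:
            return "breach"
        if used >= limit * self.warn_fraction:
            return "warn"
        return "ok"


if __name__ == "__main__":
    monitor = ThresholdMonitor({"own-desk": 10_000_000, "dea-client-1": 2_000_000})
    print(monitor.on_order("dea-client-1", 1_500_000))  # ok (75% of limit used)
    print(monitor.on_order("dea-client-1", 400_000))    # warn (95% of limit used)
    print(monitor.on_order("dea-client-1", 200_000))    # breach
```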

Governance: Gaining a 360-degree view

Hitting a kill switch will always be the very last resort. Firms need to have in place the detailed oversight that will provide layer after layer of risk management, to effectively detect, control and head off emerging issues - hopefully avoiding the need to ever flick the switch. 

Whilst technology may be part of the answer here, it all starts with humans and making sure the right ones are involved in different activities from the very beginning. It’s about determining where the risks may be and then putting in place appropriate and proportional technical controls and human procedures.

Take the development of algorithms, for example: to effectively evaluate the potential risks, you need the right mix of people sitting around the table from the start to gain a 360-degree view of risk. Once an algorithm is deployed, those in the pilot seat need access to real-time information, so that if the unforeseen emerges they have the necessary decision-making information at their fingertips to manage the risk effectively.

We’re all responsible

MiFID II clarifies that it is very much the responsibility of every market participant to ensure market wellbeing, to monitor their trading activities and to avoid negatively impacting the fairness and orderliness of the market. If firms are to achieve this goal effectively, they need to look holistically across their algorithmic trading operations and try to detect possible sources of risk.

Putting in place measures to ensure firms can more effectively test, monitor and control how algorithms react to unpredictable developments is essential. It is fair to say that the next big issue to knock the markets off kilter may well be a type of failure we haven’t seen before.
 

By Steve Colwill, CEO, Velocimetrics
