In a world where successful execution is vitally important, the need for speed has been a dominant force driving technology investment in recent years. The market has evolved significantly over this period, and the definition of ‘fast’ has certainly moved up a notch or two. To keep pace, firms have progressively introduced quicker trading systems and developed shorter physical network routes to markets. In an effort to minimise the distance their trading signals need to travel, many have moved their trading infrastructure into co-located facilities situated as close as possible to the exchange’s matching engine, shaving off crucial microseconds.
However, the laws of physics limit the degree to which speed can be used as a competitive differentiator, and an increasing number of firms are now bumping up against the barrier that the speed of light presents. In an effort to forge ahead of the super-quick crowd, some firms are investigating the potential of even faster-responding technologies such as Field Programmable Gate Arrays (FPGAs), which can be programmed to respond to an order directly, reducing the distance that the trading signal needs to travel within the computer system itself. Microwave technology is also being examined: trading signals are sent as radio signals through the air instead of through fibre optic cables. Radio waves in air travel at close to the speed of light in a vacuum, whereas light in fibre travels at roughly two-thirds of that speed, and a microwave link can take the shortest possible route, a straight line, versus having to twist and turn, as is often the case with cables that need to go around buildings and other physical obstacles, such as the Great Lakes.
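A rough back-of-the-envelope calculation shows why microwave links appeal. The distances below are purely illustrative, not figures from any real route; the physical constants are standard:

```python
# Illustrative one-way latency comparison: microwave (line of sight)
# versus fibre. Distances here are hypothetical examples only.
C = 299_792.458          # speed of light in a vacuum, km/s
FIBRE_SPEED = C * 2 / 3  # light in glass travels at roughly two-thirds of c

los_km = 1200.0    # assumed straight-line distance between two venues
fibre_km = 1560.0  # assumed fibre route, ~30% longer due to obstacles

microwave_one_way_ms = los_km / C * 1000            # ≈ 4.0 ms
fibre_one_way_ms = fibre_km / FIBRE_SPEED * 1000    # ≈ 7.8 ms
saving_ms = fibre_one_way_ms - microwave_one_way_ms

print(f"microwave: {microwave_one_way_ms:.2f} ms, "
      f"fibre: {fibre_one_way_ms:.2f} ms, saving: {saving_ms:.2f} ms")
```

Even on these hypothetical figures, the combined effect of a shorter path and a faster propagation medium yields a saving measured in whole milliseconds, an eternity at the latencies discussed above.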
However, the speed of light is an absolute limit, and whilst each of these technologies offers firms the potential to trade closer to that physical limit, further investment in this area is likely to deliver ever-diminishing returns.
Fast has become the new “normal”, and the value of speed alone is becoming increasingly commoditised. This, coupled with a rise in regulation of some of the trading techniques high-frequency traders use to gain an edge, has led some firms to look at other ways to retain their competitive advantage, and many are exploring how to combine ultra-low latency execution with increasingly sophisticated trading strategies. This approach is really upping the pressure on trading technology to perform, as a strategy’s success depends on the system’s ability to make effective trading decisions based on timely information, and then to swiftly send the resulting orders out to the market.
However, with greater sophistication often comes increased complexity, as such strategies frequently require multiple factors to be considered when making a trading decision. Unless carefully controlled, especially in a fast environment, the byproduct of this increased complexity can be greater operational risk: because there are more moving parts, there are many more places where things can go wrong. As such, the ability to manage the accuracy of all decision-making inputs is essential, as the impact of poor-quality data or execution delays can very quickly turn otherwise profitable trades into losses.
To manage this risk effectively, firms need to understand not only whether the data feeding their trading decisions is arriving in a timely fashion, but also whether it is of good quality. One indicator of inaccurate data could be, for instance, a price that has moved uncharacteristically from the last time it was seen, which could suggest that market data or pricing ticks are missing from the stream. Another could be a consolidated data stream that appears to be alive but in which, unbeknown to the system absorbing the information, updates from certain trading venues or individual instruments are missing. Additionally, firms operating these strategies need confidence that their systems will retain the ability to make smart trading decisions in times of high market volatility. Such environments can place trading systems under huge pressure, requiring them to cope with extreme spikes in the volume of data to be processed while keeping pace with a rapidly moving market. Increasingly, technology and business strategy have merged, and so have their associated risks.
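The two data-quality checks described above, uncharacteristic price moves and venues silently dropping out of a consolidated stream, can be sketched in a few lines. The class, thresholds and alert format below are hypothetical and purely illustrative; a production system would calibrate thresholds per instrument from historical tick data:

```python
class FeedQualityMonitor:
    """Minimal sketch of two feed-quality checks: prices that jump
    uncharacteristically between ticks, and venues that go quiet
    while the consolidated stream as a whole still looks alive."""

    def __init__(self, max_move_pct=5.0, max_silence_secs=2.0):
        self.max_move_pct = max_move_pct        # illustrative threshold
        self.max_silence = max_silence_secs     # illustrative threshold
        self.last_price = {}    # (venue, symbol) -> last price seen
        self.last_update = {}   # venue -> timestamp of last tick

    def on_tick(self, venue, symbol, price, ts):
        """Process one tick; return a list of alert strings (empty if clean)."""
        alerts = []
        prev = self.last_price.get((venue, symbol))
        if prev is not None and prev > 0:
            move = abs(price - prev) / prev * 100.0
            if move > self.max_move_pct:
                alerts.append(f"SUSPECT_MOVE {venue} {symbol}: {move:.1f}% jump")
        self.last_price[(venue, symbol)] = price
        self.last_update[venue] = ts
        return alerts

    def silent_venues(self, now):
        """Venues whose last update is older than the silence threshold."""
        return [v for v, ts in self.last_update.items()
                if now - ts > self.max_silence]
```

Calling `on_tick` for every inbound update and polling `silent_venues` periodically would surface both failure modes as they develop, rather than after they have fed bad decisions.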
This level of insight is required regardless of where the firm’s trading system is located, which is likely one of the main reasons why a growing number of outsourcing service providers, whose clients depend on the fastest-performing connectivity infrastructure for effective trade execution, are witnessing rising demand for more detailed, real-time information on the performance of their co-located trading operations. No longer happy to settle for purely technical analytics, these customers want to understand performance in business terms, for instance on an order, trade or client basis, so that they have to hand the information needed to assess their risk levels and improve.
Market events over recent years, such as the flash crash, have shown just how much disruption automated trading can cause when things don’t go as intended. Even when the algorithms employed are far less complex than those currently being developed, the result can be a market that nosedives. To address this concern, regulators are increasingly focused on ensuring that firms have effective systems and controls in place to manage their algorithmic trading strategies, preventing the sending of erroneous orders and systems functioning in a way that could contribute to a disorderly market. Additionally, there continues to be a desire for increased transparency, with the relevant authorities seeking more information on the strategies, trading parameters and limits, and risk controls in place.
A multi-faceted approach is now required to manage the additional operational risk factors presented by increased complexity at high speed. Firms need a real-time understanding of what is happening across their complete trading environment, and to be alerted as soon as there is a risk that the quality or timeliness of the data influencing their trading decisions may be degrading. They also need to perform effective capacity planning, so that they understand how their systems will behave under changing market conditions and can be confident they will know when volumes are approaching capacity limits, allowing them to decide whether to continue trading or to pull out of the market before potentially having a negative impact on it.
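The capacity-alerting idea can be sketched as a sliding-window throughput measure compared against a ceiling established during capacity testing. All names, figures and thresholds below are hypothetical, chosen only to illustrate the mechanism:

```python
from collections import deque

class CapacityMonitor:
    """Minimal sketch: track message throughput over a sliding window
    and flag when it approaches an illustrative tested capacity limit."""

    def __init__(self, capacity_msgs_per_sec, window_secs=1.0, warn_at=0.8):
        self.capacity = capacity_msgs_per_sec  # assumed tested ceiling
        self.window = window_secs
        self.warn_at = warn_at                 # warn at 80% utilisation
        self.timestamps = deque()

    def on_message(self, ts):
        """Record one inbound message; return utilisation as a fraction of
        the tested capacity (can exceed 1.0 under extreme volume spikes)."""
        self.timestamps.append(ts)
        # Discard messages that have slid out of the measurement window.
        while self.timestamps and ts - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window / self.capacity

    def approaching_limit(self, utilisation):
        """True once utilisation crosses the warning threshold, giving the
        firm time to decide whether to keep trading or pull back."""
        return utilisation >= self.warn_at
```

Feeding every inbound message through `on_message` and checking `approaching_limit` on the result gives an early warning while there is still headroom, rather than after the system is already saturated.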
There is a complication, however. As the observer effect in quantum physics, often associated with Heisenberg’s uncertainty principle, illustrates, measuring and observing an object is necessary to understand it, yet doing so can alter the state of the thing being examined. Firms pushing at the higher end of the performance spectrum therefore need ways to gain the oversight necessary to control their environment effectively, whilst minimising, or ideally removing, the impact of monitoring on the performance of their systems. Externalising oversight monitoring is key to meeting these conflicting goals, as well as providing independent and objective measures.
What is becoming increasingly apparent is that balancing the desire to remain in pole position, trading fast with the very smartest algorithms, against the need to control operational risk and meet emerging regulatory pressures is likely to present many challenges to firms keen to retain their competitive edge in the coming years.
By Steve Colwill, CEO of Velocimetrics