Meeting the Challenges of High-Frequency Trading With In-Memory Computing

By Nikita Ivanov | 18 October 2016

High-frequency, algorithmic, and quantitative trading is becoming the norm. Financial services firms are looking for a technical edge that reduces latency, increases performance, and handles ever-greater analytical complexity. At the same time, they must maintain transaction-level compliance and risk-management controls. To meet these challenges, firms are now adding in-memory computing to their technology toolbox. For a deep dive into this topic, read “Driving High-Frequency Trading with In-Memory Computing,” a new white paper by GridGain Systems, the leading provider of open source in-memory computing solutions for the financial services industry.

High-Frequency Trading Technology Demands and In-Memory Computing

High-frequency securities trading uses rules-based, high-speed strategies to perform many simultaneous trades – with every decision driven by computerised, quantitative models. In essence, automated high-frequency trading relies on computer programs that analyse the market situation and decide when to buy, sell, or take other financial actions. The idea is to predict the market's movements and act on those predictions ahead of other traders – which pays off only if the predictions come true and the technology is fast enough to exploit them first.
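
To make the shape of such a program concrete, the sketch below is a deliberately simplified, hypothetical example (not a real strategy): a moving-average crossover that consumes a stream of prices and emits buy, sell, or hold signals while the short-term average sits above or below the long-term one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration of a rules-based trading signal: signal BUY while a
// short-term price average is above a long-term one, SELL while below.
public class CrossoverSignal {
    private final Deque<Double> shortWin = new ArrayDeque<>();
    private final Deque<Double> longWin = new ArrayDeque<>();
    private final int shortLen = 5, longLen = 20;

    /** Feed in the latest price; returns "BUY", "SELL", or "HOLD". */
    public String onPrice(double price) {
        push(shortWin, price, shortLen);
        push(longWin, price, longLen);
        if (longWin.size() < longLen) return "HOLD"; // not enough history yet
        double shortAvg = avg(shortWin), longAvg = avg(longWin);
        if (shortAvg > longAvg) return "BUY";
        if (shortAvg < longAvg) return "SELL";
        return "HOLD";
    }

    private static void push(Deque<Double> win, double price, int cap) {
        win.addLast(price);
        if (win.size() > cap) win.removeFirst(); // keep a fixed-length sliding window
    }

    private static double avg(Deque<Double> win) {
        return win.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }
}
```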

High-frequency trading rose to prominence in the 2008-2009 timeframe, a period of market instability and volatility surrounding the financial crisis. When prices move quickly, trading firms (as Goldman Sachs demonstrated at the time) that can be equally quick about processing market information and executing trades can make substantial profits.

While high-frequency trading encompasses a variety of specific approaches – market making, market taking, arbitrage, pairs trading, and more – all of them involve the following basic steps:

  1. Obtain market information – as fast as possible
  2. Process the information through prediction algorithms – as fast as possible
  3. Execute trades based on the information – as fast as possible
  4. Fine-tune the prediction algorithms based on how they perform

With high-frequency trading, the greater the speed of the first three steps, the more profitable the trades. The fourth step, fine-tuning the algorithms through transaction-cost analysis and back-testing, is also extremely important: it means analysing how well the algorithms perform against real-time market data and then determining whether they can be improved or whether something else should be done differently.
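
Expressed as code, the loop might look like the following skeleton (every interface here is a hypothetical placeholder, not a real trading API). Steps 1-3 sit on the latency-critical hot path, while step 4 runs as a slower feedback loop.

```java
// Hypothetical skeleton of the four-step high-frequency trading loop.
public class TradingLoop {
    interface MarketFeed   { MarketTick next(); }                    // step 1: obtain data
    interface Predictor    { Signal predict(MarketTick tick); }     // step 2: run the model
    interface OrderGateway { Fill execute(Signal signal); }         // step 3: trade
    interface Tuner        { void recordOutcome(Signal s, Fill f); } // step 4: feedback

    record MarketTick(String symbol, double bid, double ask) {}
    record Signal(String symbol, String side, double price) {}
    record Fill(String symbol, double price, long quantity) {}

    void run(MarketFeed feed, Predictor model, OrderGateway gateway, Tuner tuner) {
        while (true) {
            MarketTick tick = feed.next();           // 1. as fast as possible
            Signal signal = model.predict(tick);     // 2. as fast as possible
            if (signal != null) {
                Fill fill = gateway.execute(signal); // 3. as fast as possible
                tuner.recordOutcome(signal, fill);   // 4. off the hot path in practice
            }
        }
    }
}
```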

The first three steps are transaction-intensive processes, requiring online transaction processing (OLTP) technology. The last step is primarily an analytics-heavy process, requiring online analytical processing (OLAP) technology. For high-frequency trading to be profitable, firms need both paradigms, OLTP and OLAP, to work together with the greatest possible performance. Achieving this environment requires multiple technologies.

Technologies Used in High-Speed Trading

Implementing high-frequency trading requires a very strong, very fast infrastructure. Currently, the technologies firms use to create this infrastructure (typically in some combination) include:

  • Dark fiber cabling – A leased, privately operated optical-fiber infrastructure that directly connects the decision engine to the backbone of the exchange.
  • Exchange co-location – Putting the trading computer with the decision engine in the same data center where the exchange hosts its matching engine.
  • Hardware-based programming logic – Putting the decision logic onto hardware – field-programmable gate arrays (FPGAs) and graphics processing units (GPUs) – rather than implementing it in software.
  • Apache® Hadoop™ with MapReduce – Using Hadoop's distributed file system to move data rapidly among nodes and MapReduce to break applications into many small tasks that can run on any node.
  • Complex Event Processing (CEP) – Processing multiple streams of data at the same time using advanced mathematical data structures and algorithms to infer patterns or events.
  • Parallel processing clusters – Using network clusters to expedite high-frequency trading, especially heavy analytics, by distributing processing across multiple nodes in the cluster (a single-machine sketch of this pattern follows this list).
  • In-memory computing – Keeping data in memory instead of on disk to deliver massive improvements in performance and the extreme scalability needed for very large data sets. In-memory computing is one of the newest and most successful technologies used for high-frequency trading and other big data applications.
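
Several of these entries share one idea: split the work and run the pieces concurrently. As a single-machine analogue of the parallel-processing entry above (a hypothetical example, not production trading code), the sketch below uses Java's parallel streams to spread an analytical computation across CPU cores; a cluster framework distributes the same pattern across machines.

```java
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.DoubleStream;

// Single-machine analogue of parallel analytical processing:
// computing realised volatility over many price series at once.
public class ParallelAnalytics {
    public static void main(String[] args) {
        double[][] priceSeries = new double[1_000][];
        for (int i = 0; i < priceSeries.length; i++)
            priceSeries[i] = ThreadLocalRandom.current().doubles(10_000, 90, 110).toArray();

        // The parallel stream spreads the per-series work across CPU cores;
        // a cluster framework spreads the same work across nodes.
        double maxVol = Arrays.stream(priceSeries)
            .parallel()
            .mapToDouble(ParallelAnalytics::stdDev)
            .max()
            .orElse(0);
        System.out.println("Highest volatility across series: " + maxVol);
    }

    private static double stdDev(double[] xs) {
        double mean = DoubleStream.of(xs).average().orElse(0);
        double var = DoubleStream.of(xs).map(x -> (x - mean) * (x - mean)).average().orElse(0);
        return Math.sqrt(var);
    }
}
```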

A Closer Look at In-Memory Computing

In-memory computing is an essential technology for use cases involving the analysis of deep levels of data, such as level-one and level-two market data, as well as for use cases requiring extremely fast response times. Placing 100 percent of the required data into RAM can provide response times that are 1,000 to 1,000,000 times faster than traditional disk-based approaches.
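
The gap is easy to see even in a toy measurement. The sketch below is illustrative only, not a rigorous benchmark (OS caching and JIT warm-up make single runs noisy): it times one RAM lookup against one disk read, and on typical hardware the in-memory access is orders of magnitude faster, which is the effect in-memory platforms exploit at scale.

```java
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Toy comparison of an in-memory lookup vs. a disk read. Illustrative only.
public class RamVsDisk {
    public static void main(String[] args) throws Exception {
        Map<Integer, String> inMemory = new HashMap<>();
        Path file = Files.createTempFile("quotes", ".dat");
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            for (int i = 0; i < 100_000; i++) {
                inMemory.put(i, "quote-" + i);
                raf.writeBytes(String.format("%-16s\n", "quote-" + i)); // 17-byte records
            }

            long t0 = System.nanoTime();
            String quote = inMemory.get(42_000);   // RAM access: typically nanoseconds
            long ramNanos = System.nanoTime() - t0;

            byte[] buf = new byte[16];
            long t1 = System.nanoTime();
            raf.seek(42_000L * 17);                // disk access: micro- to milliseconds
            raf.readFully(buf);
            long diskNanos = System.nanoTime() - t1;

            System.out.printf("RAM read (%s): %d ns, disk read: %d ns%n",
                quote, ramNanos, diskNanos);
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```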

In addition to putting data in memory for faster access, in-memory computing platforms leverage clustered memory and massively parallel processing, both of which today's high-frequency trading favours. Building distributed and clustered processing directly into the memory layer makes performance much faster, even compared to conventional software-based parallel processing. These hybrid transactional/analytical processing (HTAP) solutions can reduce cost and complexity and improve performance for use cases that require both OLTP and OLAP. In short, in-memory computing platforms turn “big data” into “fast data.”
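
GridGain's platform is built on Apache Ignite, so a minimal HTAP sketch can be written against Ignite's public API. The example below (illustrative details, not a production configuration) writes trade prices into an in-memory cache (the OLTP side) and then runs a SQL aggregate over the same live data (the OLAP side), with no separate analytical copy.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

// Minimal HTAP sketch on Apache Ignite: fast key-value writes (OLTP) and a SQL
// aggregate (OLAP) run against the same in-memory data set.
public class HtapSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Double> cfg = new CacheConfiguration<>("tradePrices");
            cfg.setIndexedTypes(Long.class, Double.class); // expose key/value to SQL

            IgniteCache<Long, Double> prices = ignite.getOrCreateCache(cfg);

            // OLTP side: low-latency writes as trades arrive.
            prices.put(1L, 101.25);
            prices.put(2L, 101.40);
            prices.put(3L, 101.10);

            // OLAP side: SQL analytics over the same live, in-memory data.
            List<List<?>> rows = prices
                .query(new SqlFieldsQuery("select avg(_val), max(_val) from Double"))
                .getAll();
            System.out.println("avg/max price: " + rows.get(0));
        }
    }
}
```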

A common use case for in-memory computing is trading platforms with high-volume transactions, algorithmic trading, and ultra-low-latency requirements. Large banks, hedge funds, and financial technology firms use the technology to process and analyse large amounts of data for decision-making. Performing compliance checks with in-memory computing also gives firms a significant edge over competitors still using slower, conventional software-based compliance controls.
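
For example, a pre-trade compliance gate can be kept entirely in memory so the check adds almost nothing to the order path. The sketch below uses hypothetical limits and names: it checks an order against a per-trader position limit held in a concurrent in-memory map. In a real deployment, that state would live in a replicated in-memory data grid so every order gateway sees the same positions.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical pre-trade compliance gate: approve an order only if it keeps the
// trader's net position within a configured limit, checked entirely in memory.
public class ComplianceGate {
    private final ConcurrentMap<String, Long> netPositions = new ConcurrentHashMap<>();
    private final long positionLimit;

    public ComplianceGate(long positionLimit) {
        this.positionLimit = positionLimit;
    }

    /** Atomically applies the signed quantity if the resulting position stays within the limit. */
    public boolean approve(String traderId, long signedQty) {
        boolean[] approved = {false};
        netPositions.compute(traderId, (id, pos) -> {
            long current = (pos == null) ? 0L : pos;
            long next = current + signedQty;
            if (Math.abs(next) <= positionLimit) {
                approved[0] = true;
                return next;        // within limit: accept and update position
            }
            return current;         // over limit: reject, leave position unchanged
        });
        return approved[0];
    }

    public static void main(String[] args) {
        ComplianceGate gate = new ComplianceGate(1_000);
        System.out.println(gate.approve("trader-7", 800)); // true: position becomes 800
        System.out.println(gate.approve("trader-7", 300)); // false: 1,100 would breach the limit
    }
}
```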

Sberbank, the largest bank in Russia and the third largest in Europe, has documented using in-memory computing to demonstrate 1 billion transactions per second on just 10 Dell blades with a combined one terabyte of memory. Significantly, the complete hardware for this modern in-memory system cost only about $25,000, versus the million-dollar-plus price tag of the older in-memory technology previously used for high-frequency and algorithmic trading.

As competition in high-frequency trading intensifies, financial services firms need a new level of transactional speed and analytical power to beat their rivals. The falling cost of in-memory computing makes it an essential addition to their infrastructure, giving them the high-performance edge they need.

By Nikita Ivanov, Founder & CTO, GridGain Systems.