Empowering fintech with in-memory computing

GridGain Systems | May 23, 2017

To compete and innovate in a global, technology-driven economy, banks and financial institutions must make unprecedented amounts of data available for analysis and process billions of transactions in real and near real-time. To achieve this, many institutions are adopting cloud-based applications and outsourcing some of their data processing and analysis to fintech companies that can do the processing better and faster – while still complying with strict regulations. But what powers fintech? How can these companies deliver on their promise of ever greater speed and scale?

Empowering Fintech with In-Memory Computing, a new whitepaper by GridGain Systems, discusses how in-memory computing is one of the key technologies powering the Fintech revolution.

The fintech market grew from $1.8 billion in 2010 to more than $19 billion in 2015. A major driver of this growth has been banks turning to fintech solutions to address consumer-facing challenges, including 24/7 availability, digital banking, payments through social media, biometrics, contactless spending and more. At the same time, financial institutions are automating investments and trades, relying increasingly on processor-intensive algorithms and machine learning for capabilities such as instant risk and sentiment analysis. The lending industry is also being disrupted by new strategies, including peer-to-peer lending and crowdfunding.

All these new strategies and services require the highest levels of speed, scalability, availability, security and flexibility. In short, they need distributed in-memory architectures.

By keeping data in RAM, in-memory computing eliminates the bottleneck that slows down other high-performance computing strategies: slow disk access. An in-memory computing platform that combines data caching in RAM with data distribution and parallel processing across a computing cluster enables service providers to process transactions roughly 1,000 times faster, handling hundreds of thousands or even millions of transactions per second.
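As a minimal sketch of the idea, the snippet below uses Apache Ignite, the open-source in-memory computing platform on which GridGain is built; the cache name and keys are purely illustrative. The application starts (or joins) a cluster node, keeps working data in a distributed cache held in RAM, and reads it back without touching disk.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class InMemoryBasics {
    public static void main(String[] args) {
        // Start (or join) a cluster node; every additional node that joins the
        // same cluster automatically shares the cached data.
        Ignite ignite = Ignition.start();

        // "tradePrices" is an illustrative cache name.
        IgniteCache<String, Double> prices = ignite.getOrCreateCache("tradePrices");

        // Reads and writes are served from RAM on whichever node owns the key.
        prices.put("AAPL", 153.87);
        System.out.println("Cached price: " + prices.get("AAPL"));
    }
}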

In-memory technology has been around for decades, but until recently, the cost of RAM was too high to implement large-scale in-memory computing infrastructures. Today, however, the cost of memory continues to tumble, dropping an average of 30 percent per year, making in-memory computing platforms more cost-effective each year. Gartner has reported that the in-memory technology market will grow to $10 billion by the end of 2019, representing 22 percent compound annual growth.

The in-memory computing platform

In-memory computing platforms have evolved to include key capabilities that are essential for powering fintech solutions, including in-memory data grids, SQL grids, compute grids and service grids, as well as event stream processing and Apache® Hadoop® acceleration. Here are brief explanations of how fintech firms are deploying these capabilities; short illustrative code sketches follow the list:

a) In-memory data grids are inserted between the application and database layers to cache disk-based data from RDBMS, NoSQL, and Hadoop databases in RAM. Data grids typically replicate and partition data caches automatically across multiple nodes and enable on-demand scalability simply by adding new nodes to the cluster. Some data grids offer ACID compliance, as well as support for all popular RDBMSs.

b) In-memory SQL grids supplement or replace a disk-based RDBMS, with applications communicating with the SQL grid through standard ODBC and JDBC APIs. An in-memory SQL grid typically requires no custom coding and is horizontally scalable, fault-tolerant and ANSI SQL-99 compliant. It should also support standard SQL commands such as SELECT, UPDATE, INSERT, MERGE and DELETE. Some in-memory SQL grids also support geospatial data.

c) In-memory compute grids enable distributed parallel processing of resource-intensive compute tasks. They typically offer adaptive load balancing, automatic fault tolerance, linear scalability and custom scheduling. They may also be built around a pluggable service provider interface (SPI) design to offer a direct API for Fork-Join and MapReduce processing.

d) In-memory service grids provide control over services deployed on each cluster node and guarantee the continuous availability of all deployed services in case of node failures. Most in-memory service grids can automatically deploy services on node startup, deploy multiple instances of a service, and terminate any deployed service.

e) In-memory streaming and continuous event processing establish windows for processing and run either one-time or continuous queries against these windows. The event workflow is typically customizable and is often used for real-time analytics. Data can be indexed as it is being streamed to make it possible to run extremely fast distributed SQL queries against the streaming data.

f) In-memory Apache Hadoop acceleration provides easy-to-use extensions to the disk-based Hadoop Distributed File System (HDFS) and traditional MapReduce, delivering up to ten times faster performance. The in-memory computing platform can be layered on top of an existing HDFS and used as a caching layer offering read-through and write-through, while the compute grid can run in-memory MapReduce.
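To make these capabilities concrete, here are a few short sketches, again using Apache Ignite as a representative open-source in-memory computing platform; all cache, table and class names are illustrative, and each sketch is a minimal example rather than a production configuration. First, an in-memory data grid cache (item a) configured to partition data across the cluster, keep one backup copy of every partition and support ACID transactions; adding nodes to the cluster rebalances the partitions automatically.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class DataGridExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Partition the "accounts" cache across all nodes, keep one backup
        // copy of each partition and enable ACID-transactional updates.
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("accounts");
        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setBackups(1);
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        IgniteCache<Long, String> accounts = ignite.getOrCreateCache(cfg);
        accounts.put(1001L, "EUR settlement account");
    }
}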
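Next, the SQL grid (item b), accessed through plain JDBC. This sketch assumes a recent Ignite version with the thin JDBC driver on the classpath; the host, table and values are illustrative.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlGridExample {
    public static void main(String[] args) throws Exception {
        // Standard JDBC against the in-memory SQL grid; no custom coding beyond SQL.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {

            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS trades (" +
                "id BIGINT PRIMARY KEY, symbol VARCHAR, qty INT, price DOUBLE)");
            stmt.executeUpdate("INSERT INTO trades (id, symbol, qty, price) " +
                "VALUES (1, 'AAPL', 100, 153.87)");

            try (ResultSet rs = stmt.executeQuery(
                    "SELECT symbol, SUM(qty * price) FROM trades GROUP BY symbol")) {
                while (rs.next())
                    System.out.println(rs.getString(1) + " notional: " + rs.getDouble(2));
            }
        }
    }
}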
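The compute grid (item c) distributes CPU-intensive work across the cluster, with load balancing and failover handled by the platform. The sketch below fans out a toy per-slice risk calculation (simulateRisk is a hypothetical placeholder) and aggregates the results.

import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class ComputeGridExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Split the job into callables that the cluster executes in parallel.
        Collection<IgniteCallable<Double>> jobs = new ArrayList<>();
        for (int slice = 0; slice < 8; slice++) {
            final int s = slice;
            jobs.add(() -> simulateRisk(s));
        }

        // call() blocks until all jobs finish and returns their results.
        double totalRisk = ignite.compute().call(jobs)
            .stream().mapToDouble(Double::doubleValue).sum();
        System.out.println("Aggregate risk: " + totalRisk);
    }

    // Placeholder for a real Monte Carlo or pricing routine.
    private static double simulateRisk(int slice) {
        return slice * 0.01;
    }
}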
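The service grid (item d) keeps long-lived services available across node failures. A minimal sketch, with PricingService as a hypothetical example: the service is deployed as a cluster-wide singleton, and the platform restarts it on another node if its host fails.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ServiceGridExample {
    // Illustrative service; a real one would expose business methods via an interface.
    public static class PricingService implements Service {
        @Override public void init(ServiceContext ctx) { System.out.println("Pricing service initialized"); }
        @Override public void execute(ServiceContext ctx) { /* long-running work, if any */ }
        @Override public void cancel(ServiceContext ctx) { System.out.println("Pricing service stopped"); }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Exactly one instance cluster-wide, redeployed automatically on failure.
        ignite.services().deployClusterSingleton("pricingService", new PricingService());
    }
}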
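Finally, streaming and continuous event processing (item e). The sketch below registers a continuous query whose listener fires for every update to a tick cache, which is the basic building block for real-time analytics over streaming data; in production the feed would typically arrive through a data streamer rather than individual puts.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class StreamingExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, Double> ticks = ignite.getOrCreateCache("ticks");

        // The local listener is invoked for every matching cache update.
        ContinuousQuery<String, Double> qry = new ContinuousQuery<>();
        qry.setLocalListener(events -> events.forEach(e ->
            System.out.println(e.getKey() + " -> " + e.getValue())));
        ticks.query(qry);

        // A plain put is enough to trigger the listener in this sketch.
        ticks.put("EURUSD", 1.0931);
    }
}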

In-memory computing platforms in action

There are many interesting examples of in-memory computing platforms being used in financial services. For example, Misys, a financial services software provider with over 2,000 clients, including 48 of the world’s 50 largest banks, needed to support client demand for managing huge amounts of trading and accounting data, high-speed transactions and real-time reporting. Misys built an in-memory computing platform, deploying a parallel processing cluster using commodity servers, each with 256GB RAM. The in-memory computing platform eliminated processing bottlenecks, enabled real-time processing of massive amounts of transaction data, and allowed Misys to launch its new FusionFabric Connect product, a cloud-based SaaS collection of modules that integrates many trading systems.

Another great example of the power of in-memory computing is Sberbank, the largest bank in Russia and Eastern Europe and the third largest in Europe. Sberbank expected significant growth in transaction volume and wanted to migrate to an open-source data grid architecture. The bank also needed the ability to introduce new products in hours, not weeks, and the platform had to deliver the highest levels of performance, reliability and scalability while lowering costs. Sberbank deployed an in-memory computing platform that delivered these capabilities, along with redundancy and high availability. In testing, the bank was able to generate up to one billion transactions per second on an array of ten Dell R610 blades with one terabyte of memory – assembled at a cost of just $25,000.

Conclusion

Fintech is one of the primary verticals driving digital transformation. As such, it must be a leader in delivering the speed and scale that banks and financial services firms (as well as consumers and enterprises) expect from modern, web-scale applications. The rapid adoption of in-memory computing platforms reflects the ability of this technology to process more transactions faster than ever and analyze huge amounts of data in real and near real-time, enabling an entirely new generation of services.
