In-memory Computing: Boosting the performance of financial services IoT projects

By Nikita Ivanov | 26 July 2017

Few would doubt that the Internet of Things (IoT) will disrupt everything we know about transaction processing and data analytics. According to Gartner, by 2020, 26 billion IoT devices will be generating data that is communicated over the Internet for analysis – often in real time. And that figure doesn’t even include computers and smartphones.

The financial services industry is already embracing IoT with impressive initiatives. Banks are beginning to install location-aware ATMs that pre-load account information for approaching customers, who are identified by their cell phones. Auto insurance providers use telematics to monitor driver behavior. And health insurance providers are eagerly looking to connect to fitness tracking devices.

As these initiatives take hold and begin to scale, application performance will become a critical success factor in customer satisfaction and, in some cases, safety (of patients and drivers, for example). To ensure applications can ingest, store, and analyze data at the required speed and scale, the infrastructure must incorporate ubiquitous network connectivity, highly efficient sensors and devices, and the following application infrastructure characteristics:

a) Fast and scalable computational, analytical and back-end storage systems

b) High availability

c) Streaming real-time data collection

d) The ability to adjust immediately to variable workloads

e) Interoperability

f) The best possible security

To create this application infrastructure, forward-thinking companies are turning to in-memory computing. By keeping data in RAM, in-memory computing eliminates the slow disk access that bottlenecks most high-performance computing strategies. Applications built on in-memory computing platforms – which typically combine caching data in RAM with parallel processing of data distributed across a computing cluster – can run up to 1,000 times faster than the same applications built on disk-based databases, frequently achieving processing speeds of hundreds of thousands or even millions of transactions per second.
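The combination described above – data held in RAM, partitioned across a cluster, and queried in parallel – can be sketched in a few lines. This is a hypothetical toy, not a real platform API: the "nodes" are plain dicts standing in for the RAM of separate machines, and `put` / `parallel_scan` are illustrative names.

```python
# Toy model of an in-memory computing platform: hash-partition records
# across the RAM of several "nodes", then run the same query on every
# node concurrently and merge the partial results.
from concurrent.futures import ThreadPoolExecutor

NODES = 4
cluster = [{} for _ in range(NODES)]          # each dict = one node's RAM

def put(key, value):
    cluster[hash(key) % NODES][key] = value   # hash-partition keys across nodes

def parallel_scan(predicate):
    """Run the same filter on every node concurrently, then merge results."""
    with ThreadPoolExecutor(max_workers=NODES) as pool:
        parts = pool.map(
            lambda node: [v for v in node.values() if predicate(v)],
            cluster)
    return [v for part in parts for v in part]

for i in range(1000):
    put(f"txn-{i}", {"id": i, "amount": i * 10})

large = parallel_scan(lambda txn: txn["amount"] > 9900)
```

In a real deployment each partition lives on a different machine, so both memory capacity and query throughput grow as nodes are added.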

The desire to implement in-memory technology has been around for decades. Technologists have long known that working with data in memory removes the lag inherent in moving data back and forth between disk and RAM. But until recently, memory costs were too high for large-scale projects in many applications. With the cost of memory continuing its steady decline of roughly 30 percent annually, many companies with data-intensive requirements can now reap significant ROI from the performance benefits of an in-memory computing platform. In light of this, Gartner has predicted that the in-memory technology market will grow to $10bn by the end of 2019, a 22 percent compound annual growth rate.

Boost Performance of Financial IoT Projects, a new white paper from GridGain Systems, explains how to boost the performance of financial IoT projects with in-memory computing.

Under the Hood of an In-Memory Computing Platform

To deliver the speed, scale, interoperability and security that IoT projects demand, in-memory computing must provide a variety of capabilities. The key features that IoT project designers should look for in an in-memory computing platform include:

a) An in-memory data grid which can be inserted between the application and data layers to cache disk-based data from RDBMS, NoSQL or Hadoop databases. Data caches are automatically replicated and partitioned across multiple nodes, and new nodes can easily be added to the cluster to achieve on-demand scalability. Some data grids offer support for strong ACID transaction compliance and ANSI SQL-99.
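Inserting a data grid between the application and data layers amounts to the cache-aside pattern: reads check RAM first and only fall back to the disk-based store on a miss. A hedged sketch, in which Python's built-in `sqlite3` stands in for the backing RDBMS and `accounts` / `get_account` are illustrative names:

```python
# Cache-aside read path: serve from the in-memory grid when possible,
# populate it from the database on a miss.
import sqlite3

db = sqlite3.connect(":memory:")              # stand-in for the slow RDBMS
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 250.0), (2, 900.0)])

cache = {}                                    # stand-in for the data grid

def get_account(account_id):
    if account_id in cache:                   # cache hit: no disk access
        return cache[account_id]
    row = db.execute("SELECT balance FROM accounts WHERE id = ?",
                     (account_id,)).fetchone()
    if row is not None:
        cache[account_id] = row[0]            # populate the cache for next time
    return cache.get(account_id)

first = get_account(1)    # first read loads from the database
second = get_account(1)   # repeat read is served from RAM
```

A production grid adds what this toy omits: replication and partitioning of the cache across nodes, and optional ACID transactions over the cached data.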

b) Distributed SQL functionality which supplements or even replaces a disk-based RDBMS. In-memory distributed SQL typically uses standard ODBC and JDBC APIs and should not require extensive custom coding. The solution should be horizontally scalable, fault tolerant and ANSI SQL-99 compliant. It should support DDL and DML commands and offer support for geospatial data.
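The point of the standard-API requirement is that applications issue ordinary DDL and DML without custom coding. The workflow looks the same whether the engine is distributed or not; here Python's DB-API module `sqlite3` illustrates it (a real in-memory SQL grid would be reached through its JDBC/ODBC driver instead, and the `trades` table is an invented example):

```python
# Standard SQL workflow against an in-memory engine: DDL to create the
# schema, parameterized DML to load data, then an aggregate query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, "
             "symbol TEXT, qty INTEGER)")                        # DDL
conn.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                 [("ACME", 100), ("ACME", 50), ("GLOBEX", 75)])  # DML
total = conn.execute("SELECT SUM(qty) FROM trades WHERE symbol = ?",
                     ("ACME",)).fetchone()[0]
```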

c) An in-memory compute grid which enables distributed parallel processing of resource-intensive compute tasks, typically offering adaptive load balancing, automatic fault tolerance, linear scalability and custom scheduling. The compute grid can also be built around a pluggable service provider interface (SPI) to offer a direct API for Fork-Join and MapReduce processing.
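The Fork-Join/MapReduce style of processing a compute grid exposes can be sketched with the standard library. This is a minimal illustration of the pattern, not a platform API: a job is forked into per-worker tasks, mapped in parallel, and the partial results are joined by a reduce step (`ThreadPoolExecutor` stands in for the cluster's nodes).

```python
# Fork-join / MapReduce in miniature: split the input, map a task over
# each chunk in parallel, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_task(chunk):
    """Runs on one 'node': compute a partial sum of squares."""
    return sum(x * x for x in chunk)

def fork_join(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]   # fork into tasks
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_task, chunks))       # parallel map
    return reduce(lambda a, b: a + b, partials)           # join / reduce

result = fork_join(list(range(1, 101)))   # sum of squares 1..100
```

In a real grid the map tasks are shipped to the nodes holding the relevant data, which is what makes the approach scale linearly with cluster size.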

d) In-memory streaming and continuous event processing which establishes windows for processing and can run either one-time or continuous queries against these windows. The event workflow is typically customizable and is often used for real-time analytics. Data can be indexed as it is being streamed to make it possible to run extremely fast distributed SQL queries against the streaming data.
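The window-plus-continuous-query idea can be shown with a sliding window over a stream of events: each arrival updates the window and re-evaluates a query against it. A toy sketch with invented names (`on_event`, the 100-unit alert threshold):

```python
# Sliding-window stream processing: keep only the last WINDOW events in
# memory and recompute an aggregate (a continuous query) on each arrival.
from collections import deque

WINDOW = 5
window = deque(maxlen=WINDOW)      # only the last 5 events stay in memory
alerts = []

def on_event(amount):
    window.append(amount)
    moving_avg = sum(window) / len(window)   # continuous query over window
    if moving_avg > 100:                     # illustrative alert threshold
        alerts.append(moving_avg)
    return moving_avg

for amt in [40, 60, 80, 120, 200, 300]:
    last_avg = on_event(amt)
```

A streaming platform generalizes this: windows can be time- or count-based, the workflow is customizable, and the windowed data can be indexed so distributed SQL queries run against it while it streams.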

e) An in-memory service grid which delivers the necessary control over services deployed on the cluster nodes, while guaranteeing continuous availability of the services when a node fails. An in-memory service grid should be able to automatically deploy services on node startup, deploy multiple instances of a service, and terminate deployed services, all while enabling the deployment of microservices.
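The availability guarantee boils down to failover: when a node dies, its services are redeployed on the survivors. A toy sketch of that mechanism, with invented names (`deploy`, `fail_node`) and a deliberately naive placement rule:

```python
# Service-grid failover in miniature: track which node hosts each service,
# and move services off a failed node so they remain available.
deployments = {}                      # service name -> node id
nodes = {"node-1", "node-2"}

def deploy(service, node):
    deployments[service] = node

def fail_node(dead):
    nodes.discard(dead)
    survivor = sorted(nodes)[0]       # naive placement: first surviving node
    for service, node in deployments.items():
        if node == dead:
            deployments[service] = survivor   # keep the service available

deploy("rates-svc", "node-1")
deploy("fraud-svc", "node-2")
fail_node("node-2")                   # fraud-svc is redeployed on node-1
```

A real service grid would also rebalance load across survivors and restart multiple instances per service; the single-survivor rule here is only for illustration.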


The IoT age is just getting started, and the possibilities remain limitless, but one thing is certain: the IoT applications developed for financial services and related areas will require unprecedented levels of speed, scalability and flexibility. An in-memory computing platform is the most cost-effective way to achieve these goals.
