Thinking beyond tick-box compliance
According to Gartner, the average cost of IT downtime is around $5,600 per minute, or roughly $336,000 per hour, and this figure excludes regulatory penalties and reputational damage. Avoiding downtime in production is therefore essential, and load testing helps ensure that your core banking application (referred to below as "the system") is ready for production.
Specific periods such as payday, the end of the financial year, and festive seasons can cause traffic spikes on banks' core banking systems, so thorough performance testing is needed to shield customers from performance failures. A notable example of customers being directly affected is the period following demonetisation in India, when consumers had debit card transactions declined because of a massive surge in digital transaction volumes.
Marking the objectives
The main objective of performance testing is to validate that system behaviour and response times under peak load are consistent and measurable, and that the system exhibits no degradation in performance when exposed to average load for an extended period. Any performance bottlenecks found during testing are identified and documented. The tests also verify that load is uniformly distributed across the product's architectural tiers. Finally, they determine the number of concurrent users the core banking application can support and inform scalability improvements that allow more users access.
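One common way to reason about the concurrent-user objective is Little's Law: average concurrency equals throughput multiplied by the time each user spends per transaction (response time plus think time). A minimal sketch with purely illustrative numbers, not figures from any specific engagement:

```python
def concurrent_users(throughput_tps: float,
                     avg_response_s: float,
                     think_time_s: float = 0.0) -> float:
    """Little's Law: average concurrency = arrival rate x time in system.

    Including user think time models how many logged-in users are needed
    to sustain the target transaction rate, not just in-flight requests.
    """
    return throughput_tps * (avg_response_s + think_time_s)

# Illustrative target: 500 TPS, 0.8 s average response, 10 s think time
print(concurrent_users(500, 0.8, 10))  # -> 5400.0
```

Read the other way round, the same relation lets testers back out the response time a system must sustain to support a required user population at a given transaction rate.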
Performance testing scope
Any performance testing should cover all types of activities typical of a core banking implementation and production lifecycle. These include online transaction processing (from both system user interface and channels), intraday uploads, end of cycle batches and legacy data migration during the cutover window.
A two-step strategy
The following two methodologies could be planned to test the system capacity:
- Performance testing using automated load testing tools
- Business simulation preceded by migration cutover activities
Methodology 1: Automated load testing
The following sections describe the typical phases of automated load testing.
a) Study and plan
A detailed requirement study is undertaken to understand the scope and application functionality, collect performance requirements from a business perspective, analyse business volumetrics and historical data, and finalise performance testing goals, objectives and acceptance criteria. External dependencies are also documented, and an appropriate action plan and RACI matrix are drawn up.
b) Design and build
Migrated data would be uploaded onto the performance test platform, and initial sanity checks of the application would be done. Test scripts would be prepared for the various business scenarios common to the user interface, channels and batches; these scripts would then be used to simulate load into the application tiers. Any agreed historical data to be built would also be pumped in. During this phase, monitoring tools are configured to gather system performance metrics for the testing window. The outcome of this phase would be a frozen performance testbed with the following:
- Frozen application setup with an appropriate parameterisation for performance testing
- Load simulation scripts for an agreed count of application users for each module
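Load simulation scripts can take many forms depending on the tooling chosen. As a tool-neutral illustration, here is a minimal closed-loop load driver: the `transaction` callable stands in for one scripted business scenario (UI, channel or batch upload), and all numbers are assumptions for demonstration only:

```python
import random
import threading
import time
from statistics import mean

def run_load(transaction, users: int, duration_s: float):
    """Closed-loop load driver: `users` worker threads repeatedly invoke
    `transaction()` for `duration_s` seconds, recording each call's latency."""
    latencies, lock = [], threading.Lock()
    stop = time.monotonic() + duration_s

    def worker():
        while time.monotonic() < stop:
            t0 = time.monotonic()
            transaction()                      # one scripted business scenario
            with lock:
                latencies.append(time.monotonic() - t0)

    threads = [threading.Thread(target=worker) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# Stand-in transaction: a sleep mimicking a 5-15 ms service call.
lat = run_load(lambda: time.sleep(random.uniform(0.005, 0.015)),
               users=10, duration_s=1.0)
print(len(lat), round(mean(lat), 4))
```

In a real engagement this role is played by a dedicated load testing tool driving the agreed count of application users per module; the sketch only shows the closed-loop pattern such tools implement.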
c) Execute and diagnose
Iterative test runs would be conducted to validate environmental configurations and application performance behaviour. At least three iterative stages can be configured, as detailed below. The outcome of these stages would be an optimised environment for the final measurement run. Between each iterative stage, the performance testbed would be restored and a re-run conducted.
The following are the stages for each performance testing round:
Stage 1: Each individual function would be executed at its peak load, and system behaviour recorded. If any shortcomings are noted in environmental or application configurations, they would be communicated to the respective stakeholders for refinement to obtain maximum throughput.
Stage 2: The integrated business scenario would be simulated, and system behaviour recorded. As in stage 1, any shortcomings noted in environmental or application configurations would be communicated to the respective stakeholders for refinement to obtain maximum throughput.
Stage 3: A final round of simulations would be conducted to validate that all refinements, fine-tuning and fixes promoted from earlier test rounds are in place. Load tests at peak-hour volumes, an endurance run and stress tests would be executed to test the core banking system's resilience.
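The stress and endurance runs differ mainly in their load profile: a stress test ramps users stepwise until the system degrades, while an endurance run holds an average load for an extended period. A minimal sketch of a stepwise ramp schedule (all numbers illustrative):

```python
def ramp_profile(base_users: int, step: int, steps: int, hold_s: int):
    """Stepwise ramp for a stress test: each tuple is (user count, seconds
    to hold that load). An endurance run is the degenerate case of a single
    long step at average load."""
    return [(base_users + step * i, hold_s) for i in range(steps)]

print(ramp_profile(100, 50, 4, 300))
# -> [(100, 300), (150, 300), (200, 300), (250, 300)]
```

The point at which response times or error rates break away from the trend as the steps climb marks the system's practical capacity ceiling.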
d) Measure and report
In this phase, the final measurement run(s) would be conducted, and all system metrics captured for measurement purposes. All metrics would be validated post-run and used to prepare the final performance test report.
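As a sketch of that post-run validation step, the captured per-transaction latencies are typically reduced to percentile metrics and checked against a response-time target; the 1,000 ms SLA threshold here is purely illustrative:

```python
from statistics import quantiles

def summarise(latencies_ms, sla_ms: float = 1000.0):
    """Reduce a measurement run to key percentiles plus the share of
    transactions inside an (illustrative) response-time SLA."""
    pct = quantiles(latencies_ms, n=100)  # 99 cut points: pct[k-1] is the k-th percentile
    within = sum(1 for x in latencies_ms if x <= sla_ms) / len(latencies_ms)
    return {"p50": pct[49], "p95": pct[94], "p99": pct[98], "within_sla": within}

# Toy run: 95 fast transactions and 5 slow outliers
print(summarise([100.0] * 95 + [2000.0] * 5))
```

Percentiles rather than averages are the usual basis for the final report, since a small tail of slow transactions can hide behind a healthy-looking mean.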
Methodology 2: Business simulation
This is an essential aspect of performance testing, as it provides a final confirmation of the readiness of the core banking system, human resources and processes. A few weeks before the actual live cutover, multiple dry runs of the cutover activities are carried out to simulate the actual sequence of activities. This includes executing a mock data migration (from legacy systems) against a frozen set of to-be production system parameters. Upon completing the mock data migration, the system is handed over to the end users, who execute a day's transactions from a chosen earlier business day. All channels are also opened, and transactions are replayed into the system from their respective simulators. Intra-day file uploads are also initiated. System health is closely monitored using various performance testing tools and reports.
These simulations confirm that the system performs smoothly when all users are logged in. They also uncover gaps in system functionality and show how well the system is configured in terms of application and environmental software parameters. Last but not least, they help gauge how well the end users are trained on the new system.
As part of the simulation exercise, end-of-day batches would be executed to measure batch timelines and identify performance bottlenecks. Multiple such dry runs can be planned to progressively improve and stabilise the results.
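Batch timelines from successive dry runs are easiest to compare when each batch step is timed individually. A minimal timing harness; the step names are hypothetical, not from any specific core banking product:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(step: str, timings: dict):
    """Record the wall-clock duration of one end-of-day batch step, so
    successive dry runs can be compared step by step."""
    t0 = time.monotonic()
    yield
    timings[step] = time.monotonic() - t0

timings = {}
with timed("interest-accrual", timings):       # hypothetical batch step
    time.sleep(0.01)                           # stand-in for the real work
with timed("statement-generation", timings):   # hypothetical batch step
    time.sleep(0.02)
print({k: round(v, 3) for k, v in timings.items()})
```

Diffing such per-step timings between dry runs makes it obvious which batch step a tuning change actually helped, rather than relying on the end-to-end total alone.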
Over the last two decades, Oracle has worked with many leading global retail banks to help them navigate their complex transformational journeys, whether core banking product upgrades, infrastructure upgrades, or mergers and acquisitions. These banks have leveraged Oracle Consulting's innovative and structured approach to load testing using automation tools and data load models.
A recent use case involved a mega-merger of two large West African banks, resulting in a mammoth customer base. The merged bank partnered with Oracle in a comprehensive performance test engagement on their revamped Exadata platform, resulting in a hugely improved TPS throughput and a vastly enhanced end-of-cycle timeline. This included multiple iterations of load testing at Oracle labs and customer environments, followed by two rounds of business simulations.
Oracle’s domain knowledge, business know-how, technological skills and extensive experience make it ideally positioned to help banks achieve high performance in the years to come.
To learn more, feel free to message us to have a conversation or explore Oracle’s core banking solutions here.