Bob’s guide to… risk management systems: Your implementation questions answered

By David Beach | 9 November 2017

This final article in our risk management systems guide looks at the challenges of implementing risk management solutions, drawing on expert insight as well as a case study provided by a risk vendor.


This final article in the bobsguide series on risk management systems concludes our effort to clarify the risk management acquisition process. Our risk management survey revealed the expectations and hesitations of the 200 risk professionals surveyed.

The first article looked at five questions to determine whether a new system was needed; the second looked at making the right choice of risk management system. The survey made it apparent that compliance, analytical capabilities and flexible integration were the key considerations that risk managers took to the negotiation table. This last article looks at the final step in the acquisition process - implementation.

Implementation appears to be the major issue affecting the tech purchasing decisions of CROs. 54.1% of risk managers highlighted implementation/integration as a key concern that would prevent them from acquiring new risk management technology. 48.6% cited cost, while 33.8% indicated that the introduction of complexity was a further factor that might prevent the acquisition of risk management solutions. To delve into the implementation process, we reached out to both sides of the aisle - vendor and impartial expert - for their takes on the realities of risk management implementation.

An expert opinion - Survey reaction and dos and don’ts

bobsguide spoke to Richard Magrann-Wells, Executive Vice President of Willis Towers Watson’s Financial Institutions Group, about the findings of the bobsguide risk management survey.

Are you surprised by any of the results?

Surprised that the leading risk concern amongst respondents was cost and feasibility of regulatory compliance? No, not surprised at all. In fact, our own analytics had revealed startlingly similar results. Uncertainty regarding the current regulatory environment is perhaps the most critical issue facing financial executives today.

What criteria have you seen firms choose their RMS upon?

The bobsguide survey addresses the concerns that would prevent management from acquiring new risk management technology. The overwhelming top responses were integration concerns and costs.  Regardless of how well a system responds to risk management issues in theory, management recognises that if the technology does not integrate smoothly into legacy systems it is less than useless and may create new problems. Respondents also note that high costs may well prevent acquisition of new technology.

Cost becomes increasingly relevant as risk management technology must keep pace with the changing regulatory environment and consumer requirements. Either the technology must prove itself adaptable or its cost must reflect a shorter useful life.

What key considerations should firms have when implementing risk management systems?

With cost and integration such major concerns, firms must be certain that their provider 1) understands their industry and regulatory environment and 2) has a reputation for helping firms implement and integrate the new technology throughout their firm.

What advice would you have for someone looking to go through the implementation process?

Find the right provider/partner. Build consensus amongst the impacted parties before starting, and ensure that the new technology will accomplish what your firm needs at a price you are willing to pay. Determine the goals and specifications of the new technology (including detailed integration plans and expected useful life) - make it realistic and budget accordingly. Finally, check in with insurance providers to make sure that the new technology will be compliant with all cyber and business interruption coverages.

An implementation case study – the implementation process in close-up

bobsguide also spoke to risk management provider Feedzai, which spoke candidly about the limitations and challenges of implementation and, conversely, the potential of successfully harnessing the power of machine learning.


Since 2015, Feedzai’s machine learning has been powering clients like First Data, the world’s largest payment processor. In terms of payments transaction volume, it is one of the largest, and possibly the longest-running, instances of machine learning for fraud detection we know of. Over $2.5 billion worth of commerce transactions flow through the system daily.

Compared to previous generations, such as neural networks from the 1980s, the machine learning system at First Data has benefitted from Moore’s Law. What used to take 18 months to deploy now takes 2-3 months. For large enterprises like First Data, machine learning operates on a massive scale and encompasses multiple data centres with hundreds of computer servers. Yet despite the scale and scope, it took Feedzai three months to deploy.

This is because, within the last 5-10 years, we’ve had a convergence of technologies that make machine learning more accessible. Specifically, the growth of big data (the fuel that powers machine learning systems), inexpensive data storage, faster processing chips, parallel computing and, lastly, the availability of better machine learning algorithms with unusual names like Random Forest and XGBoost – these things created the perfect storm for modern machine learning.

Is the consultation period client led or vendor led?

Clients’ needs typically fall into two categories: 1) standard or 2) custom. Clients with standard sets of needs come to the table with well-defined requirements, having run formal RFIs and RFPs. In that case, we lead clients through a process to map requirements to existing solutions.

Clients have custom needs when they are engaging in something new, such as launching a new business line, entering a new geography, or experiencing fraud attacks that have never been seen before. The rules of engagement are unknown, so machine learning that learns fast is needed. This involves building custom machine learning models, and we lead clients through a requirements definition process in order to define them.

What are the nuts and bolts of implementation?

Time to implement:

How long does it take to implement? The time required is a function of whether clients require on-premise or cloud access:

On-premise: Some clients run their own data centres and require on-premise installations. In the case of First Data, from the time the hardware became available, we installed, configured and trained over 70 models within a period of three months. The system is capable of processing more transactions per second.

Cloud: Some clients prefer to access the system via APIs. In this case, implementation can occur in a matter of 2-3 weeks, depending on the client’s needs.

People needed:

How many people are needed? What are roles and responsibilities?

Roles & responsibilities:

  1. 1x project manager (manages scope, calendar, client, overall organisation)
  2. 1x tech lead (architecture, technical decisions)
  3. 2x software engineers (document, develop, test, deploy)
  4. 2x data scientists (create the machine learning models)

Total FTE: 4-6.

Service level agreement (SLA): Are the performance SLAs sufficient to meet business goals? A typical SLA is:

  1. Latency: under 25ms for 99.999% of requests
  2. Uptime: 99.95% of the minutes of the calendar month
  3. Average Sliding Latency: for any 3 minute interval under 50ms while scoring 99.999% of transactions
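Targets like these are straightforward to verify against request logs. Below is a minimal sketch of such a check, assuming per-request latencies in milliseconds and uptime counted in minutes; the function names and numbers are our own illustrations, not any vendor's monitoring tooling:

```python
# Hypothetical sketch: checking logged latencies and monthly uptime
# against SLA targets like those above. Illustrative only.
import math

def latency_at_fraction(latencies_ms, fraction):
    """Latency below which the given fraction of requests fall."""
    ordered = sorted(latencies_ms)
    # 1-based rank of the request at the target fraction, via ceiling
    idx = max(0, math.ceil(fraction * len(ordered)) - 1)
    return ordered[idx]

def meets_sla(latencies_ms, limit_ms=25.0, fraction=0.99999):
    """SLA item 1: 99.999% of requests under the latency limit."""
    return latency_at_fraction(latencies_ms, fraction) <= limit_ms

def uptime_ok(up_minutes, total_minutes, target=0.9995):
    """SLA item 2: 99.95% of the minutes of the calendar month."""
    return up_minutes / total_minutes >= target

# 1,000 fast requests meet the target; one 40 ms outlier breaks it,
# since 99.999% compliance tolerates almost no failures at this volume
print(meets_sla([5.0] * 1000))           # True
print(meets_sla([5.0] * 1000 + [40.0]))  # False
# A 30-day month has 43,200 minutes; 21 minutes of downtime still passes
print(uptime_ok(43_200 - 21, 43_200))    # True
```

Note how strict a "five nines" latency target is: at realistic transaction volumes, even a handful of slow requests per day can breach it.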

How disruptive is implementation?

The actual implementation itself, relative to other activities, is only a small portion of the deployment process. While risk teams are the main users within the client’s organization, teams both upstream and downstream are involved:

Model governance: Due to the regulated nature of financial services, decisions must adhere to fair credit and lending acts. Thus, machine learning systems like the one deployed at First Data must meet audit and compliance teams’ requirements for adherence to consumer and commercial policies.

Logistics: Clients like First Data handle real-time payments transactions which subsequently often trigger shipping activities. Thus, warehousing teams are dependent on the machine learning decisions.

Customer service: External-facing teams need to be able to explain to customers about the decisions made by machines. Thus, customer service agents need clear, human-readable explanations to relay to customers.

What difficulties have you found with implementation?

Data hygiene: Data is the fuel that powers machine learning systems. We find that 4 out of 5 clients have significant data quality issues. Because machine learning subjects the data to greater scrutiny than ever before, clients often overestimate the quality of their data, and this can add extra “pre-implementation” steps to cleanse it.
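One way to surface such issues early is a simple pre-implementation data audit. The sketch below is a toy illustration with hypothetical field names, not Feedzai's tooling; a real audit would also check types, ranges and referential integrity:

```python
# Hypothetical sketch: profile basic data-quality issues in a batch of
# transaction records before model training. Field names illustrative.

def profile_quality(records, required_fields):
    """Return per-field counts of missing or empty values."""
    issues = {field: 0 for field in required_fields}
    for rec in records:
        for field in required_fields:
            value = rec.get(field)
            if value is None or value == "":
                issues[field] += 1
    return issues

records = [
    {"amount": 12.5, "currency": "USD", "merchant_id": "m1"},
    {"amount": None, "currency": "USD", "merchant_id": ""},
    {"currency": "EUR", "merchant_id": "m3"},  # "amount" key absent
]
print(profile_quality(records, ["amount", "currency", "merchant_id"]))
# {'amount': 2, 'currency': 0, 'merchant_id': 1}
```

Running such a profile per field makes the scale of any cleansing work visible before the implementation timeline is committed to.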

Business process bottlenecks: Clients oftentimes underestimate the impact of better, faster decisioning. For instance, model retraining can now occur in a matter of days, where it used to take six months. Thus, the communication of new model parameters - ones that impact policies and therefore require customer notifications - needs to occur more frequently. If supporting business processes are still designed for “six-month cycles”, that becomes a bottleneck for implementing modern machine learning.

How did you solve these practical difficulties?

A few best practices that we’ve found especially helpful when implementing machine learning in a high-volume, real-time environment:

Solve for transparency: Previous generations of machine learning were black boxes. Look for “whitebox” approaches that demystify the machine logic and can provide clear, human-readable output. This puts people in command and control.
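As a toy illustration of what "human-readable output" can mean, consider a simple weighted risk score that reports its top contributing features alongside the score, giving an analyst or customer service agent something to relay. The weights and feature names below are invented for illustration and are not a vendor model:

```python
# Hypothetical sketch: a "whitebox" explanation for a simple weighted
# risk score, ranking the features that contributed most to it.

def explain_score(features, weights, top_n=2):
    """Return a human-readable summary of a linear risk score."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    # rank features by how much each pushed the score up
    top = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    reasons = ", ".join(f"{name} (+{contributions[name]:.2f})" for name in top)
    return f"score {score:.2f}; main drivers: {reasons}"

weights = {"amount_vs_avg": 0.6, "new_device": 1.2, "foreign_ip": 0.9}
features = {"amount_vs_avg": 3.0, "new_device": 1.0, "foreign_ip": 0.0}
print(explain_score(features, weights))
# score 3.00; main drivers: amount_vs_avg (+1.80), new_device (+1.20)
```

Real whitebox systems work with far richer models, but the principle is the same: expose which signals drove the decision, in terms a person can read back to a customer.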

Find data and domain expertise: The implementation process inherently involves wrangling data, and not all data in all domains are the same. Find experts in both data and the domain in which the machine learning solution is being implemented. In the case of First Data, the teams working on the implementation were experts in both data science and the fraud prevention domain, i.e. “fraud science” experts.

Choose machine-learning-era underlying technology: Legacy platforms that bolt on machine learning capabilities suffer from bottlenecks; the system can only perform as well as its weakest link. Legacy systems often rest on pre-internet era technology from the 1980s and ’90s. Many implementation challenges, such as speed of processing, data access, transparent output or business process bottlenecks, can be solved by first choosing “machine-learning-era” technology. For example, the Hadoop and Cassandra databases were born from Google-era needs for fast, real-time access to massive data storage.