Alistair Rennie joined IBM in 2007 in the firm’s Toronto Software Laboratory and has held a number of senior and executive roles. Previously general manager for IBM’s Industry Analytics Unit, he is responsible for leading the creation and delivery of cognitive solutions for the financial services industry.
Bobsguide caught up with Rennie to discuss Watson’s applications in artificial intelligence (AI), the future of the industry and how collaborative understanding and experimentation will be crucial to development going forward.
What has taken both you and IBM Watson Financial Services to where you are now?
I've always been involved on the software and software solution side of IBM, and I've run various parts of IBM software businesses over the years. We had done significant amounts of work around how we took analytics, and applied them to business problems. We had done it in all sorts of interesting areas: business planning, dashboarding, and expanding the ability of analytics into business users.
We took a step back, and the challenge that we took on at the time was to consider the fundamentally impactful thing we could do in terms of creating solutions for financial services. We went through a fairly exhaustive process to figure that out. We spoke to clients around the world, looked into research, and analyzed previous work we had done with established customers.
The conclusion we came to as a team working across the software group, the services group and other parts of IBM was that regulatory compliance challenges were a substantial problem. This was in part due to the levels of effectiveness firms were achieving, as well as the runaway expense. Both factors made it an area where innovation was really, really needed. We felt it was a really significant market opportunity: it’s a $30bn market and growing pretty significantly.
We knew we had some unique capabilities, so we took that work and around it formed Watson Financial Services, with a core focus around financial crimes and operational risk, leveraging a whole bunch of technologies. Along the way IBM acquired Promontory Financial Group, so we had an infusion of expertise on the regulatory side into IBM, which was really critical as we look at how we use some of these technologies in a highly regulated space.
How have you seen the adoption and acceptance of AI solutions within financial services change?
The reception has been very strong. It's pretty clear that AI is a foundational technology for changing the way processes are organized and run inside financial institutions. It’s important to understand that what we’re after isn’t a series of data experiments where you find some interesting results and study them. We’re trying to make a deep impact on some of the core processes our clients depend on, whether that be operational risk or monitoring for financial crime.
We have seen broad receptivity from clients all around the world that AI is a foundational ingredient in that fundamental process change, driving both effectiveness and efficiency. We’ve done meaningful work with clients on how to take the premise of AI and put it into practice. It’s important to recognize that many of these firms are regulated international entities, so this has to be done in a way that is transparent and understandable for regulators.
We’ve been able to couple the work we’ve done helping build the underlying data architecture to support things in a sustainable way with how to build models that are explainable and testable, and how to take the insights of these models and have them not just be interesting outcomes but become fundamental drivers of their processes. The desire and need for this technology is extremely high, and the journey we’re on is not just one of learning to build AI models, but of learning how to put the AI into a testable context of business processes. I think that’s only going to accelerate.
How can firms select the right solution and be sure they’re not opting for a firm slapping an “AI” label on its existing systems?
This is a really important question, and it’s one we have always encouraged clients to think about in the context of the broader applications of analytics. AI is clearly a game-changing tool and capability, but for it to be effective the starting point has to be the business outcome you’re trying to achieve. Doing things for their own sake will always be unfulfilling. When we work with clients in financial crime, their purpose is to replace legacy systems that are driving significant amounts of false positive alerts, costing a great deal of money, and not really making the global financial system more secure. That’s a pretty big mission there, and to do that you need to look at the end-to-end process.
In the AI space the tools have become much more accessible. The underlying analytics, whether it be voice or natural language processing (NLP), or various machine learning algorithms, have gotten much better. But there's no magic in just getting them deployed. Helping clients understand what an underlying data architecture should look like is crucial: there's no AI without IA, meaning there's no artificial intelligence without information architecture. Getting that groundwork right is important. AI still requires data preparation and an understanding of the lineage and availability of good data. Then there’s the fact that AI will give you good insights, but those insights have to be threaded into the workflow, whether that’s making a decision on a transaction or validating who a client is. That’s what we help with – we establish a process, not just an insight, which gives you an actionable outcome.
AI is a superpower, but it needs to be put into a full context. In many financial services spaces I think this is going to become an increasing issue. You have to have that extra step to be able to show your regulator that you understand how the AI tools are arriving at their insights. There’s a final mile of making sure that these solutions, as they get put in place, are suitable and trustable in a regulated environment.
How are regulators reacting to AI, and where do you see them taking action in future?
Our experience globally has been that the regulatory agencies are coming to us in the exact same way the financial institutions are: they’re looking to find better tools, whether it be for money laundering or conduct. I think the desire to leverage these tools is rooted in maintaining trust and transparency in the financial system. With the number of exceptions we've seen globally, it's clear that the currently deployed toolset is falling short.
That’s leading regulators to be quite open-minded and quite innovative about partnering with the industry, and in many cases encouraging experimentation with and understanding of new tools, which hold real promise of increasing the effectiveness and safety of the system. We’ve seen regulators in Europe and the UK set up sandboxes and working groups to begin to understand the tools. I think these have been very constructive.
In the US at the end of 2018, a number of agencies, including the Office of the Comptroller of the Currency (OCC), the Financial Crimes Enforcement Network (FinCEN), the Financial Industry Regulatory Authority (Finra) and a couple of others, came out encouraging the financial services industry to start establishing experiments. I think that’s really encouraging, and I have been encouraged by the relative speed at which regulatory bodies are starting to embrace things. They’re taking a wise approach in terms of learning. There are a lot of reasons for them to be agile and iterative and transparent.
What goals do you hope to have achieved at Watson Financial Services in the next few years? And where's the industry headed?
Our overwhelming and ambitious goal is to have a fundamental impact on the trust and transparency in the global financial system. You can look at a number of instances in the past where there has been a risk – whether it be money laundering issues, conduct issues or where trust in the financial institutions has been hurt with clients or consumers.
I expect we will continue to see rapid adoption of experimentation and the beginnings of implementation of these systems, where people are really starting to turn things from theory into practice. I wouldn’t want to venture a timeframe, but I think it will be a reasonably short period before people start to decommission legacy systems that run on very old rules-based technology and start to replace them with modern systems based on compelling analytics that have a more effective view of understanding risk. We will see that turnover occur at an increasing rate in the future.