While financial institutions are still in the early stages of adopting AI, intelligent machines are expected to become the defining technology for the future of the financial services and insurance industries. Surveys suggest that 75% of insurance executives believe AI will either significantly alter or completely transform the insurance industry by 2020, and one-third believe their own company will be completely transformed by AI within that timeframe.
AI is expected to redefine the way financial institutions gather information from and interact with their customers, with embedded AI promising better data analysis and deeper insight. A range of institutions are already experimenting with or have implemented AI/ML capabilities across various processes, with benefits including improved straight-through reconciliation (STR) of incoming payments, higher conversions from service recommendation engines, a better understanding of customer behavior and preferences, and large-scale automation.
Among financial institutions, Goldman Sachs is one of the most vivid examples of an institution embracing the potential of AI. Goldman Sachs has brought significant automation into trading areas such as currencies and futures using complex trading algorithms, some with machine learning capabilities. MIT Technology Review reports that the US cash equities trading desk at Goldman Sachs's New York headquarters went from 600 traders in 2000 to just two, with automated trading programs supported by 200 computer engineers doing the rest of the work.
HDFC, ICICI, BofA, Charles Schwab and JP Morgan are also among the institutions applying AI/ML across a variety of use cases.
The insurance industry has its own examples. Allianz, MetLife, Transamerica, QBE Insurance Group, XL Catlin, and Aetna have explored various applications of AI, and some of them have reported notable results.
MetLife, for example, has reported that Shift Technology, an AI startup based in France, helped a European coalition of insurers to analyze 13 million claims. The technology identified 3,000 new cases of potential fraud, including a large, organized crime scheme that impacted nearly all the coalition’s members. The scam had siphoned millions of Euros from the group’s insurance company members over the span of many years, according to a Shift Technology case study.
There is no shortage in examples of how intelligent machines are used to address critical areas of the financial services and insurance industries, but all of them rest on a single defining foundation – data.
Structured data is the foundation of successful AI adoption
While adopting machine learning and artificial intelligence is critical for success in the era of rapid digital transformation, how organizations structure their data to make it usable for driving insights is even more important.
The financial services and insurance industries are driven by data. The way data is leveraged has an enormous impact on the bottom line and on customer satisfaction. However, despite the expected benefits and the abundance of technological advancements applicable to various elements of the value chain and operations in these industries, institutions have yet to fully harness the potential of AI. The main reason is the complexity of organizing the data that feeds intelligent machines.
As Jon Theuerkauf, former managing director and group head of performance excellence at BNY Mellon, said: “Forget AI, I don’t even know what it means. Why are we jumping on it, if we haven’t done the basics?”, referring to structured data being the key to AI.
“We are now in a transitional phase, and are still three to five years away from an integrated, automated operating environment. For example, it takes a long time to train Watson. Why? Because the data does not lend itself easily to allowing Watson to learn. So there needs to be order around that data, and we are now starting to put things together and take the chaos out of it.”
80% of modern data is unstructured, representing a security risk and inhibiting the adoption of advanced technologies
“Like the physical universe, the digital universe is large – by 2020 containing nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes.” – IDC
The vast majority of the data that makes up the digital universe, however, is and will remain unstructured. And although unstructured documents are widely used as key inputs and “systems” for core business activities, organizations face significant challenges with unstructured data: managing and extracting value from its constant influx, processing huge volumes of it as quickly as possible, and identifying new and innovative technologies within their industry that can handle it.
Moreover, unstructured data is seen as a vulnerability in the face of cyber-threats. Because organizations struggle to understand where critical unstructured data resides, how it is used, and who has access to it, it can represent a bigger risk to the enterprise, according to IBM. The level of risk varies significantly from case to case, but two difficulties are common to unstructured data: applying a standard, organization-wide measure to protect it, and applying advanced technologies to drive value from the range of sources that generate it.
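To make the structured/unstructured distinction concrete, here is a minimal sketch of turning free-text records into analyzable fields. The claim notes, field names, and formats below are entirely hypothetical and illustrative; real documents are far messier, which is exactly why this problem is hard at enterprise scale.

```python
import re

# Hypothetical free-text claim notes -- the fields and formats are
# illustrative, not drawn from any real insurer's documents.
notes = [
    "Claim CLM-1001 filed 2019-03-04, water damage, estimate $4,200.",
    "Claim CLM-1002 filed 2019-03-11, vehicle collision, estimate $12,500.",
]

# One pass of pattern matching turns each note into a structured record
# that downstream analytics (or an ML model) can actually consume.
pattern = re.compile(
    r"Claim (?P<claim_id>CLM-\d+) filed (?P<date>\d{4}-\d{2}-\d{2}), "
    r"(?P<cause>[^,]+), estimate \$(?P<estimate>[\d,]+)"
)

def structure(note):
    """Extract a structured record from one free-text note."""
    m = pattern.search(note)
    if m is None:
        return None  # unparseable notes would be routed to manual review
    rec = m.groupdict()
    rec["estimate"] = int(rec["estimate"].replace(",", ""))
    return rec

records = [structure(n) for n in notes]
print(records[0]["cause"], records[0]["estimate"])  # water damage 4200
```

Once notes are reduced to uniform records like these, organization-wide controls and analytics become straightforward to apply; in their raw, free-text form, neither is.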
Here is how 25% of global systemically important financial institutions (GSIFIs) in the US are maintaining their competitive advantage
The financial services and insurance industries are highly dependent on data-driven predictive analytics. Structured, organized data is critical for accurate and dynamic adjustment of financial products and services to continuously changing consumer habits and behaviors, as well as changing market conditions.
Today, structured data is the competitive advantage for 40% of the GSIFIs in the US (by AUM) who use a single solution – the Pendo Machine Learning Platform (PMLP) by Pendo Systems. Pendo Systems' machine learning platform transforms unstructured data into AI-ready datasets at machine scale, allowing businesses to explore, discover, and analyze unstructured data accumulated across a wide variety of sources. Applying real-world customer training data, the Pendo Machine Learning Platform improves the accuracy of standard NLP libraries to over 95%.
Pendo recently released version 4.0 of the Pendo Machine Learning Platform (PMLP), which incorporates a number of new capabilities.
The new release has a vastly improved toolset that significantly accelerates time to implementation and offers the ability to tackle more complex machine learning processing challenges. New features in version 4.0 allow subject-matter experts (SMEs) to create training data with the Pendo UI and then train models against it. This puts the solution in the hands of business users, not just IT groups.
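The general SME-in-the-loop workflow described above can be sketched generically: experts label examples, and a model is trained against those labels. The sketch below is not Pendo's API or architecture; it is a self-contained toy (a multinomial naive Bayes classifier over hypothetical labeled snippets) meant only to show what "create training data, then train models against it" means mechanically.

```python
import math
from collections import Counter, defaultdict

# Hypothetical SME-labeled snippets (label, text) -- illustrative only,
# not Pendo's data model or API.
training = [
    ("fraud", "duplicate invoice submitted twice same amount"),
    ("fraud", "claim filed day after policy purchase staged accident"),
    ("legit", "routine windshield repair receipt attached"),
    ("legit", "annual checkup claim standard reimbursement"),
]

def train(examples):
    """Fit a multinomial naive Bayes model from labeled text."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of examples
    for label, text in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the most probable label, with Laplace smoothing."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total)       # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train(training)
print(classify("duplicate invoice same amount", model))  # fraud
```

The point of the design is the division of labor: SMEs supply only labeled examples through a UI, while the platform handles the model fitting, which is why such tooling can sit with business users rather than engineering teams.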
Version 4.0 of the Pendo Machine Learning Platform (PMLP) radically improves the UI for managing complex classification and processing of documents. It also offers new connectivity options, with Content Management Interoperability Services (CMIS) support and web crawling included, and brings new plugins that integrate seamlessly to provide access to a range of machine learning algorithms.