This article on 2017’s AI comes shortly after Google’s AutoML project created an AI child that outperformed comparable AI built by humans. The ‘child AI’, called NASNet, was created by parent AIs and utilises ‘reinforcement learning’, which enables it to report back to, learn from and improve on its parents.
Whilst we’d scheduled an AI recap for 2017 anyway, this seems to have been the most significant development in AI technology this year. So we’ve put together a tour of the world of AI as told by our articles over the course of 2017, looking at the many applications of AI, what AI is and where it’s going.
Back to basics: What is AI?
AI is the science of creating intelligent machines and computer programs that can take over human tasks. Computer programs can execute tasks far quicker than humans can, with embedded algorithms that leave less room for human error.
Robert Smith, Chairman and CEO of Vista Equity Partners, said at The World Economic Forum: “Since the invention of computers, we have envisioned that computer systems will take the best of what we think and deliver real time solutions that are more efficient. The desire to leverage really hasn’t changed since we were first sparked with that vision. What has changed is that we have scientists, technologies and leaders who have developed new computing platforms such as AI that make this type of computing a reality.”
Tom Blomfield, CEO of Monzo says: “Broadly speaking, the way Monzo is defining and using AI effectively is through supervised learning. We have a big data set which we analyse to determine outliers. A good example of this is detecting fraud. We take a look at the vast data from historic transactions and can identify that a small portion of those is fraudulent. We feed that data into a machine learning model to build a set of rules or an algorithm that will, in future, detect which transactions are fraudulent.
“Our machine learning engine will then very quickly identify fraud and fraudulent transactions and suspend these accounts.”
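The approach Blomfield describes can be sketched in miniature. The following is a hypothetical illustration, not Monzo’s actual system: it learns a fraud rate per country from labelled historic transactions (the field names, sample data and threshold are invented for the example) and flags new transactions whose learned rate is too high.

```python
# Toy supervised fraud detection: learn from labelled history, then flag.
from collections import defaultdict

def train(history):
    """Learn an observed fraud rate per country from labelled transactions."""
    counts = defaultdict(lambda: [0, 0])  # country -> [fraud count, total]
    for tx in history:
        c = counts[tx["country"]]
        c[0] += tx["is_fraud"]
        c[1] += 1
    return {country: fraud / total for country, (fraud, total) in counts.items()}

def flag(model, tx, threshold=0.5):
    """Flag a new transaction when its learned fraud rate exceeds the threshold."""
    return model.get(tx["country"], 0.0) >= threshold

# Invented labelled history: most "XX" transactions were fraudulent.
history = [
    {"country": "GB", "is_fraud": 0},
    {"country": "GB", "is_fraud": 0},
    {"country": "XX", "is_fraud": 1},
    {"country": "XX", "is_fraud": 1},
    {"country": "XX", "is_fraud": 0},
]
model = train(history)
print(flag(model, {"country": "XX"}))  # True: high historic fraud rate
print(flag(model, {"country": "GB"}))  # False
```

A production system would of course use many features and a proper model rather than a single rate, but the shape is the same: labelled history in, a set of rules out.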
How do you identify AI?
Whilst blockchain and bitcoin have received the most hype, AI has had its fair share too. We spoke to Gurjeet Singh, CEO of Ayasdi, who told us how any old company claims to be ‘AI enabled’ when in reality they simply have ‘predictive analytics’. Gurjeet holds a PhD from Stanford in Computational Mathematics and applies AI to HSBC US’ fraud prevention function.
He explained the five characteristics that AI must demonstrate to be justifiably classified as true Artificial Intelligence technology.
1) Discovery; the ability for AI to find information from large, complex datasets without upfront human intervention. In technical terms, this is called unsupervised or semi-supervised machine learning techniques (such as segmentation, dimensionality reduction, anomaly detection, etc.).
2) AI must be able to predict. But there’s plenty on that out there so I won’t go into it because it’s probably well understood.
3) Justify. In the next five to ten years, the vast majority of enterprise systems will be heavily augmented if not outright automated. On the journey to that future, we need machines to justify outcomes to their human operators: every suggestion and prediction, every segment and anomaly. Being able to justify is critical to building trust.
4) Act. The ability to put these AI systems into practice and make them ‘live’ so they carry out the discover, predict and justify functions effectively. In a lot of large enterprises, hardly any of the AI application experiments make it to production, because they can’t pass that test in the real world.
5) Learn. AI needs to learn as the data evolves and the underlying distributions in the data change. The system must be able to monitor itself, learn from those changes and say: this data has changed, and I recommend you update your system in the following ways.
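Singh’s first characteristic, discovery, can be illustrated with a minimal anomaly detector that needs no labels or upfront rules. This is a generic sketch using the median absolute deviation, not Ayasdi’s method; the sample amounts are invented.

```python
# Unsupervised anomaly detection: no labels, no hand-written rules.
import statistics

def find_anomalies(values, threshold=3.5):
    """Return values whose modified z-score (based on the median absolute
    deviation, which is robust to the outliers themselves) exceeds threshold."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    return [v for v in values if mad and 0.6745 * abs(v - med) / mad > threshold]

amounts = [20, 22, 19, 21, 23, 18, 20, 500]  # invented data with one outlier
print(find_anomalies(amounts))  # [500]
```

The point of the sketch is that the outlier is discovered from the data’s own shape, with no human telling the system in advance what “unusual” means.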
Bilal Hijawi, Senior Content Developer at EastNets, talked to bobsguide in September about AI’s potential to change the risk landscape.
“Applying AI in risk management solutions offers users a leap in operational efficiency, with clear advantages in delivering higher accuracy in investigations and detections and reducing false positives, while also speeding up the overall screening process. When powered by AI engines, these solutions can detect unknown anomalies that the rules- and risk-based approaches developed by compliance officers cannot.
In most cases, these upgraded systems also scale better. The rewards are plenty, ultimately rewarding FIs with better overall risk mitigation and customer service. Post on-boarding, FIs currently apply due diligence systems that monitor transactions and report findings based on preset static rules and scenarios. With the growing volume and complexity of today’s business transactions, these legacy systems are increasingly inefficient, producing a high volume of false positives and causing downtime in operations, thus adding to operational costs.
In AI-powered banking environments, the approach to financial due diligence is built on intelligent machine learning. These systems analyse massive transaction datasets to discover mostly hidden patterns and relationships that characterise group transactional behaviours. Based on these patterns, present and future transactions can be evaluated and assigned different risk scores, or flagged for manual evaluation in real time.
Applying semi-autonomous risk management systems results in far fewer false positive alerts, reducing workloads and rewarding FIs with sustained operational savings. The applications of AI are many and diverse, and they’re fast evolving. As an emerging trend, AI covers multiple sources of data, offering natural language processing (NLP), data mining, text analysis, machine learning, semantics and more.
In essence, AI can accomplish many of the functions people perform. But efficient AI systems cannot survive without intelligent human input and design, based on data availability and quality. Once systems are properly set up and operational, human intervention is only needed at long intervals. These systems are built to learn from previous actions, and apply fuzzy logic to weigh the probability of one event against another.
While many small and medium-sized banks are still hesitant to adopt this intelligent technology, many anti-financial crime and compliance risk management solution providers are upgrading their solutions anyway, driving ubiquitous adoption of AI in global markets.”
Dermot Harriss, VP of Regulatory Solutions at OneMarketData, talked us through the capabilities of AI in trading.
There’s a lot of hand waving about it in trade surveillance. It’s a very interesting field and we do have a machine learning team at OneMarketData, but the most important aspects of the trade surveillance system don’t really need machine learning.
And that’s because the patterns of manipulative practice that regulators are looking at and building evidence on can be detected with reasonably simple alerts. Those alerts don’t really require advanced statistical techniques.
There are areas where advanced statistical techniques like machine learning can help. One of those areas we’re focused on is trader profiling that recognises trade deviations. This type of ‘unsupervised learning’ analyses the data, comes up with the metrics of what a normal profile resembles and, from there, figures out what abnormal is.
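A toy version of the profiling Harriss describes might learn each trader’s normal order size from history and then flag activity that deviates sharply from that trader’s own baseline. This is an illustrative sketch, not OneMarketData’s system; the trader IDs and order sizes are invented.

```python
# Trader profiling: what is "normal" is learned per trader, not fixed globally.
import statistics

def build_profiles(orders):
    """orders: list of (trader_id, order_size). Returns per-trader baselines."""
    by_trader = {}
    for trader, size in orders:
        by_trader.setdefault(trader, []).append(size)
    return {t: (statistics.mean(s), statistics.pstdev(s)) for t, s in by_trader.items()}

def is_abnormal(profiles, trader, size, z=3.0):
    """Flag an order more than z standard deviations from the trader's own mean."""
    mean, stdev = profiles[trader]
    return stdev > 0 and abs(size - mean) > z * stdev

history = [("A", 100), ("A", 110), ("A", 95), ("A", 105),
           ("B", 1000), ("B", 1100), ("B", 900)]
profiles = build_profiles(history)
print(is_abnormal(profiles, "A", 1000))  # True: huge for trader A
print(is_abnormal(profiles, "B", 1000))  # False: routine for trader B
```

The same order size is abnormal for one trader and normal for another, which is exactly why a per-trader profile beats a single global threshold.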
Machine learning can also help by learning from all the decisions made by the human user and gradually adjusting alert parameters to minimise false positives. We’re beginning to look at that approach, but it needs a large dataset and we don’t always have the data because sometimes our customers keep the data. So you can’t have effective machine learning without data management too.
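One simple way to learn from the human user’s decisions, as a sketch: re-tune the alert threshold to the lowest score an analyst ever confirmed, so confirmed cases still fire while lower-scoring alerts that were all dismissed stop firing. This is a hypothetical illustration with invented scores; a real system would also guard against suppressing novel fraud below the new threshold.

```python
# Re-tune an alert threshold from analysts' past confirm/dismiss decisions.
def tune_threshold(alerts):
    """alerts: list of (score, confirmed_by_analyst).
    Return the highest threshold that still fires on every confirmed alert."""
    confirmed = [score for score, confirmed in alerts if confirmed]
    return min(confirmed) if confirmed else None

# Invented review history: everything scored below 0.8 was dismissed.
alerts = [(0.9, True), (0.8, True), (0.6, False), (0.4, False), (0.3, False)]
print(tune_threshold(alerts))  # 0.8
```

With the threshold raised to 0.8, the three dismissed alerts would no longer be generated, which is the false-positive reduction the feedback loop is after.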
As for the future, with the amount of oversight and coordination between regulators, the requirement for all market participants to record market activities and the gradual extension of surveillance to OTC products, trade surveillance is going to be a busy area for some time to come.
Gurjeet Singh outlines how his company, Ayasdi, is using AI and Topological Data Analysis to prevent fraud in the US.
“HSBC is transforming their approach to financial crime risk and that involves looking at different technologies and processes. Their goal is to amplify their investigative teams in such a way that they catch every bad actor they possibly can. Given that the false positive rate across the industry is well north of 90%, a key leverage point for increasing the efficiency and efficacy of these operations is to reduce false positives without changing the risk envelope for the bank.
The reason why they have these false alerts is because they have a scenario based approach. To illustrate that scenario based approach, an 80 year-old in Greece making multi-thousand dollar transactions each week is unusual (this is a made-up example). Banks come up with these rules, put them through the transaction monitoring system, and the system flags up whenever the rules are breached. They use this approach partly because the financial regulators ask them to do this. So the regulators will come up with the scenarios that you both deem to be risky, and screen your transactions against those scenarios to see if you are complying.
The problem is that these scenarios are very coarse because the data is immense, and these scenarios essentially ignore almost all of it.
The way HSBC is using our software is to discover segments of customers or pseudo customers, before any transactional monitoring has happened. They use all the data that they have to discover these segments of customers and fine-tune the scenarios for each segment. This basically means that they dramatically reduce their false positive rate, because the scenarios are tuned per segment and the segments are discovered in an unsupervised way.
HSBC noticed a reduction in false positives of 20%, while they were also able to capture every suspicious activity report that was filed in the last few years.”
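The segment-tuning idea Singh describes can be sketched as follows: instead of one global transaction threshold, learn a separate baseline per customer segment, so an amount that is routine for one segment is still flagged for another. The segments, amounts and the mean-plus-k-standard-deviations rule here are invented for illustration; they are not HSBC’s actual scenarios.

```python
# Per-segment scenario tuning: each segment gets its own learned threshold.
import statistics

def per_segment_thresholds(transactions, k=3.0):
    """transactions: list of (segment, amount).
    Threshold per segment = mean + k * standard deviation of that segment."""
    by_seg = {}
    for seg, amount in transactions:
        by_seg.setdefault(seg, []).append(amount)
    return {s: statistics.mean(a) + k * statistics.pstdev(a) for s, a in by_seg.items()}

history = [("retiree", 50), ("retiree", 60), ("retiree", 55),
           ("business", 5000), ("business", 6000), ("business", 5500)]
limits = per_segment_thresholds(history)

def flagged(seg, amount):
    return amount > limits[seg]

print(flagged("retiree", 5000))   # True: unusual for this segment
print(flagged("business", 5000))  # False: routine for this segment
```

A single global rule would either flag routine business payments or miss the unusual retiree one; tuning per discovered segment avoids both, which is the false-positive reduction described above.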