bobsguide caught up with Gurjeet Singh, Co-founder and Executive Chairman of Ayasdi, to discuss where AI is heading in 2018.
Ayasdi offers an enterprise-grade artificial intelligence platform that leverages big data to build intelligent business applications; for instance, one Ayasdi application powers parts of HSBC’s anti-money-laundering technology stack. Headquartered in California, Ayasdi also has an office in London, and global expansion may bring a third office to Singapore in 2018.
What’s happening with AI in 2018?
Lots is going on with AI. Broadly speaking, there are two major ways of thinking about the problems addressable by AI today. One side is perception-based problems – self-driving cars and virtual assistants – which rely on data from imaging and sensing the environment.
Perception-based problems present a distinctive challenge to AI in that the data tends to be “over-complete”. For instance, if you took a picture of this room with a high-quality camera and a low-quality camera, you’d still be able to make general remarks about the dimensions, colours, people and so on regardless of quality. Data used for perception-based problems contains redundant information.
Perception-based AI solutions have come a long way, in part thanks to consumer technology companies such as Google – mainly because these are the problems everyone faces, problems humans evolved to solve.
The second area in AI is non-perception-based problems. Imagine you collected genetic samples from 10,000 people; a single genome from one sample would have three billion base pairs. This type of data is non-perception-based and not redundant, because very small details matter. As an example, a single switch in the DNA alphabet can cause a genetic disease. This is a problem where there is no redundancy and you need to learn from the entirety of large and complex datasets. These problems are very difficult for human beings, since we did not evolve to solve such high-dimensional, non-redundant problems.
What type of data is predominantly used in financial services?
Most financial services datasets – AML, KYC, payments, risk – fall into this non-perception-based category. Using AI to make sense of that data is the challenge.
Most AI development, in both perception and non-perception, has been around systems that are predictive. All of a sudden, enterprises are realising that the vast majority of their data is unlabelled. What is unlabelled data? Imagine payments going through a transaction system: most go through without a hiccup or investigation.
With the vast majority of data, you don’t actually know whether it’s good or bad, because it was never checked or flagged. Enterprises have so much of this unlabelled data that using it for predictive AI is a challenge. Learning from unlabelled data is called unsupervised machine learning, and improving it will be the emphasis for 2018 – you simply cannot solve whole categories of data problems with predictive AI alone.
In a financial services setting, unsupervised machine learning would make that initial discovery of customer risk groups, for instance.
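To make that concrete, here is a minimal, hypothetical sketch of unsupervised discovery in Python: a tiny k-means clustering routine groups invented customer transaction features with no risk labels at all. The feature names and numbers are made up for illustration – real AML data is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-customer features (invented): [avg transaction value,
# monthly payment count]. No risk labels exist -- that is the point.
customers = np.vstack([
    rng.normal([50.0, 20.0], 5.0, size=(200, 2)),    # ordinary retail flow
    rng.normal([500.0, 300.0], 20.0, size=(20, 2)),  # unusually heavy flow
])

def kmeans(X, k, iters=50):
    """Minimal k-means: group rows of X into k clusters, no labels needed."""
    # Seed the centres with k customers spread across the dataset.
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each customer to its nearest centre.
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        # Move each centre to the mean of its assigned customers.
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(customers, k=2)
# The discovered groups separate the heavy-flow customers from the rest --
# a starting point for a risk review, not a verdict.
```

The algorithm never sees a “risky/not risky” flag; the groups emerge purely from structure in the data, which is what the unsupervised discovery of customer risk groups amounts to.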
A second major development in AI is reinforcement learning. Google’s AlphaGo was developed to beat professional human Go players – and did. AlphaGo relied heavily on reinforcement learning, which is the discovery, learning and relearning of strategies. Today these techniques are mostly used in gaming and other simple settings, but they can be applied in the enterprise; Google famously used reinforcement learning to control its data centre cooling and reduce temperature spikes.
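The “discovery, learning and relearning” loop can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. This toy example (everything here is invented for illustration) puts an agent in a five-state corridor where only reaching the far end pays a reward; the agent finds the strategy purely by trial and error.

```python
import random

# Toy corridor: states 0..4, reward only for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):              # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value toward reward + best future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

After training, the policy steps right in every state – the strategy was never programmed in, only discovered from the reward signal, which is the same principle AlphaGo and the data-centre work scale up.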
How far are we from a ‘bot with will’?
Even though there have been great scientific achievements, we are still very far from systems that can have their own aims and agency. Even if we wanted to programme ‘agency’ into bots, the term itself is nebulous and means all manner of things. If you asked coders to insert agency, they’d ask you if you were out of your mind. We don’t know what it would take to insert agency.
Another development of 2017 has been explainable AI: the idea that if you have an AI system doing a good job on complex data, you want to be able to audit it to find out how it’s doing such a good job – and the things it has learned may be worth learning ourselves.
In very early research on neural networks, a car was developed to take pictures of the road and learn how to drive on it – a very early attempt at self-driving. It worked quite well when an engineer developed it over a summer; when winter came, the car seemingly stopped working.
The researchers didn’t understand why. With the low-resolution imaging of the time, it is now apparent that the car couldn’t distinguish shade very well – it couldn’t tell black from blue. Shade matters little under bright summer sun, but in the darker, cloudier winter months the car suddenly couldn’t distinguish where the sky was and couldn’t orient itself.
This is why we need explainable AI.
Is there any aspect of AI in particular you’re worried about? Rogue robots?
Rogue robots are so far outside of the realm of possibility… I build robots and, believe me, it is so far away!
What I’m actively worried about is the AI that optimises consumer attention at Facebook and Google. A university in the US ran a study in which it invited three people – one left-leaning, one right-leaning and one moderate – to go on their personal computers and search the term ‘Egypt’; it was shocking how different the results were.
The right-leaning participant saw articles about the Muslim Brotherhood, the moderate saw the pyramids and tourism, and the left-leaning participant saw stories about Egypt’s solar power initiatives. The takeaway is the echo chamber: how can you possibly develop your world and political view if, thanks to Google’s suggestive algorithms, you are never exposed to contradictory views? When big corporations like Facebook and Google treat ‘optimising for attention’ as a business incentive, that’s when you start to worry about reinforced culture.
Will AI ever be able to accurately predict the market volatility we’re seeing in the cryptorush?
AI systems are quite predictive overall, but the efficacy of any AI system comes down to a single factor: how much orthogonal, or unrelated, data you have. This means a bank can score credit better by asking for unrelated information – imagine marrying personal details with social media data, for example.
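The orthogonality point can be illustrated with a small synthetic experiment: a second signal only improves a prediction when it carries information the first signal doesn’t already have. All names and numbers below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
income = rng.normal(size=n)                    # first signal
social = rng.normal(size=n)                    # unrelated (orthogonal) signal
redund = income + 0.01 * rng.normal(size=n)    # near-copy of the first signal
risk = income + social + 0.1 * rng.normal(size=n)  # synthetic target

def fit_error(features):
    """Least-squares fit on the given features; return RMS prediction error."""
    X = np.column_stack(features)
    coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - risk) ** 2)))

e_one = fit_error([income])            # income alone
e_red = fit_error([income, redund])    # plus a redundant feature
e_orth = fit_error([income, social])   # plus an orthogonal feature
# Adding the redundant feature barely moves the error; adding the
# orthogonal feature slashes it, because it explains variance the
# first signal cannot.
```

Redundant data, however plentiful, adds almost nothing; a genuinely unrelated signal is what moves the needle – which is the argument for marrying, say, personal details with social media data.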
As for cryptocurrency, I’ve been a crypto fanatic ever since it came out – to be part of the ecosystem, not to speculate. It’s like triple-entry accounting: you, me and a third party all have to prove our identities and agree on the transaction, so it cannot be falsified. That was the promise: that we could accomplish triple-entry accounting without a centralised authority.
However, since January I’ve given it up, because I’ve realised cryptocurrencies won’t fulfil any of their promises of decentralisation: you will still have to rely on a Coindesk or on miners in China. You’d also need an element of trust for it to work, and that’s exactly what decentralisation is meant to do away with.
A lot of the hype around blockchain is also a waste of time, as most practical use cases require trust. IBM and Maersk have partnered on a blockchain, but customers will have to put their trust in IBM and Maersk – so it’s just a secure database. And that’s before we even get into energy consumption and wasteful proof of work.
There’s no nice way of saying it, total decentralisation cannot happen.
Crypto trying to replace SWIFT is another one. It’s a secure and much faster database, but since it is a private blockchain I have to trust whoever keeps that database, in the same way I trust SWIFT. At least with SWIFT I trust the legal guarantee behind it.