The customer-centric trend continues to grip retail banking, as any number of reports demonstrate customer demand for quicker, better and more convenient banking. Biometrics was born from the convergence of improved technology and ever greater volumes of data.
Iris scanning, facial and fingerprint recognition and many others seek to provide a non-spoofable authentication process. Around Christmas, NatWest adopted Apple's Face ID to allow users to authenticate their identity with facial recognition on the new iPhone X. Since then, Mastercard has promised that all its cards will feature an added layer of biometric authentication using smart devices by April 2019.
As banking moves further into the digital realm, fraudsters have learned the new techniques in kind and are making use of the opacity that hiding behind a screen provides. As a modus operandi, fraudsters probe for the weakest link, and typically that lies in interactions at the human level, which makes the thousands of call centres easy pickings.
The trouble is, as cynics will agree, that in the fintech game fraudsters are the most agile and adaptive players of all, and their criminal resourcefulness surely won't stop at biometrics.
We took these questions to voice biometrics expert Brian Martin, Director of UK and Ireland for Spitch. The Swiss-based voice biometrics company, which specialises in natural language processing technology, seeks to strengthen defences and reduce the fraud coming through contact centres.
How good is the technology behind voice biometrics?
The current technology is able to identify 80 distinct characteristics within the vocal tract, each of which can be measured on a scale of 1-10, so the uniqueness improves by 80 to the power of 10. This makes a voice print more distinctive than any fingerprint.
When fraud departments begin looking at voice biometrics as a viable technology, what specifically are they looking to cut out along the fraud chain?
Firstly, voice biometrics can help with sentiment analysis, measuring indications of positive or negative customer behaviour, which are then used to score conversations for effectiveness in real time. The three emotional features of speech – nerves, power and arousal – can be analysed in combination in real time, allowing us to identify vocal traits that translate into usable indications of, for instance, a willingness to buy.
Of course, this is great for behavioural analytics but also serves to build up a database of fraudsters who are often repeat offenders. This means that as soon as the agent begins talking to a known fraudster, the system will flag it up in real time.
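The real-time flagging described above can be sketched in outline: compare a voice-print embedding extracted from the live call against a watchlist of known fraudsters' prints. This is a minimal illustration, not Spitch's implementation; the cosine-similarity comparison, the `check_watchlist` function and the 0.85 threshold are all assumptions for the sake of the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_watchlist(call_embedding, watchlist, threshold=0.85):
    """Return IDs of known-fraudster voice prints matching the live caller.

    watchlist maps a fraudster ID to a stored voice-print embedding;
    the threshold is an illustrative tuning parameter, not a real value.
    """
    return [fraudster_id
            for fraudster_id, print_embedding in watchlist.items()
            if cosine_similarity(call_embedding, print_embedding) >= threshold]
```

In a live system the embedding would be refreshed as the call progresses, so a match can surface to the agent within the first few seconds of conversation.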
That's almost a phonebook of fraudsters. How might the banks help each other out?
Cross-collaboration is critical. If banks chose to share those voice prints, they could reduce instances of fraud across the banking sector. Fraudsters would have nowhere to go if they were successfully barred from all banks.
What is the business case for voice biometrics, and how might a bank use it as a cost-effective method?
Some of the best results we've seen in terms of resource savings have been where callers are requesting relatively simple transactions that can be identified and passed to the automated system – as much as 80% of calls. Those calls that do reach the contact centre can be shortened by up to 20% by identifying the customer and cutting out the verification and admin processes before they speak to the human operator.
Would there also be a use case in regulation?
Voice biometrics can become a key part of the regulator's toolkit. MiFID II requires that telephone conversations be recorded and accessible. The technology would be able to pinpoint key phrases and words as they're spoken and flag and timestamp them. Similarly, it would give a definitive answer as to whether required disclaimers were or were not given.
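Once a call has been transcribed with timestamps, the flagging step itself is straightforward to picture. The sketch below is an assumption-laden illustration, not MiFID II tooling: the `flag_phrases` function, the segment format and the sample disclaimer text are all invented for the example.

```python
def flag_phrases(segments, watch_phrases):
    """Scan timestamped transcript segments for regulated key phrases.

    segments: list of (seconds_into_call, text) pairs from the recording.
    Returns (phrase, timestamp) hits, so a compliance reviewer can jump
    straight to the moment a disclaimer was given; an empty result is
    evidence a required phrase was never spoken.
    """
    hits = []
    for seconds, text in segments:
        lowered = text.lower()
        for phrase in watch_phrases:
            if phrase.lower() in lowered:
                hits.append((phrase, seconds))
    return hits
```

A production system would match against the audio-aligned transcript rather than raw text, but the output shape – phrase plus timestamp – is the same.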
Looking to the future, where will this technology go?
Voice biometrics will become a further embedded part of technology, with the likes of Google, Siri and Alexa. They all use natural language processing voice technology that is deployed on quite a wide scale. On the enterprise level, we're looking to make further use of machine learning, which will continue to improve our algorithms in the background and drive up the speed and accuracy of our platform.