Are deep fakes a threat to the future of identity verification?

  • Rebecca Angus
  • November 29, 2019
  • 5 minutes

New research shows an alarming surge in the creation of deep fake videos, with the number posted online almost doubling in the last nine months, according to the BBC.

President Obama’s famous lip-synching video went viral, and the most recent deep fake features Boris Johnson and Jeremy Corbyn, each appearing to endorse the other to be the next UK prime minister.

While much of the concern about deep fakes currently centres on their use in politics, celebrity impersonation and pornography, could this new generation of technology threaten identity verification for businesses and consumers?

In this article, we will explore whether deep fake technology is a threat to your client onboarding processes, and how you can safeguard your company against identity fraud.

 

What are deep fakes?

A deep fake uses advanced, neural-network-powered AI to superimpose existing video footage of a face onto a source head and body, and the technology is easily accessible to anyone. For example, a deep fake could appear to be a real person’s recorded face and voice, but the words they appear to be speaking were never actually uttered by them, at least not in that particular order.

This technique poses an obvious threat to the political arena and to highly confidential government entities, but also to any organisation that onboards new customers using remote biometric authentication.

Can deep fakes threaten biometric authentication?

Due to the threat of identity fraud, most of the leading remote Know Your Customer (KYC) onboarding players have embedded some form of liveness detection as part of the identity verification process.

“Liveness” detection is a vital feature for modern biometric-based recognition solutions. Checking for “liveness” verifies that the person attempting to confirm their identity is a living subject and not a copy or imitation. Liveness detection combines biometric facial recognition, identity verification and lip-sync authentication to reduce the chances of a spoofing attempt being successful.
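
As a rough illustration of how those signals might be combined, here is a minimal Python sketch; the signal names, scores and thresholds are assumptions for illustration only, not any vendor's actual scoring logic.

```python
from dataclasses import dataclass

# Illustrative sketch of a liveness decision combining biometric signals.
# The field names and thresholds are hypothetical, not a real product's API.

@dataclass
class LivenessSignals:
    face_match: float       # selfie video vs. identity document photo, 0..1
    lip_sync: float         # audio/video sync while speaking, 0..1
    challenge_passed: bool  # did the spoken response match the challenge?

def is_live(signals: LivenessSignals,
            face_threshold: float = 0.90,
            sync_threshold: float = 0.85) -> bool:
    """Accept only when every signal clears its bar."""
    return (signals.challenge_passed
            and signals.face_match >= face_threshold
            and signals.lip_sync >= sync_threshold)

# Example: a genuine applicant passing all three checks.
print(is_live(LivenessSignals(face_match=0.97, lip_sync=0.93, challenge_passed=True)))
```

Requiring every signal to pass independently means an attacker must defeat all of them at once, which is the property the combined check relies on.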

As deep fake videos evolve and become more sophisticated, there is growing concern that they could defeat the capabilities of current liveness checks.

Can deep fake videos penetrate “liveness” checks?

Many liveness checks ask users to look in different directions or to change their facial expressions, for example by frowning or smiling, to help reduce the risk of spoofing.

One essential part of a liveness check is voice authentication: any quality liveness check will ask its users to say a set of randomised numbers out loud, to camera.

Randomising the numbers is important because it means bad actors and automated fraud attempts cannot predict the numbers that will be displayed.

Currently, there are no known deep fake systems that can generate a synthetic response that looks like the user, says random words or performs random movements correctly, and keeps exact audiovisual sync, all within the limited timeframe available. Even if such a fake could be constructed, it would be hugely labour-intensive for each application, making large-scale fraud impractical.
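
To make the challenge-response idea concrete, here is a minimal Python sketch of a randomised spoken-digit challenge with a short response window; the digit count, time limit and function names are illustrative assumptions, not a description of any real product.

```python
import secrets
import time

CHALLENGE_LENGTH = 6      # digits the user must read aloud (illustrative)
RESPONSE_WINDOW_S = 10.0  # seconds before the challenge expires (illustrative)

def issue_challenge() -> tuple[str, float]:
    """Generate an unpredictable digit string and note when it was issued."""
    digits = "".join(secrets.choice("0123456789") for _ in range(CHALLENGE_LENGTH))
    return digits, time.monotonic()

def verify_response(challenge: str, issued_at: float, transcript: str) -> bool:
    """Accept only an exact, in-time match of the spoken digits.

    `transcript` stands in for the output of speech-to-text on the user's
    video; a real system would also score audio/video lip sync, omitted here.
    """
    if time.monotonic() - issued_at > RESPONSE_WINDOW_S:
        return False  # too slow: rules out labour-intensive offline synthesis
    return transcript.strip() == challenge

challenge, issued_at = issue_challenge()
print(f"Please read aloud: {challenge}")
```

Because the digits come from a cryptographically secure generator and expire within seconds, an attacker cannot pre-render a matching deep fake; they would have to synthesise a convincing, in-sync response in real time.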

NorthRow is a regtech provider that offers remote client onboarding solutions. Matt Law, NorthRow’s CTO, explained: “The challenge for us is to build a product that is easy for applicants to use, yet technologically advanced enough to detect and prevent fraudulent use [like deep fakes]. That is why NorthRow has partnered with AimBrain to launch the RemoteVerify solution.”

RemoteVerify is an identity verification solution that combines facial recognition, voice authentication, identity document verification, and liveness detection along with a full KYC compliance check.

As Alesis Novik, chief technology officer and co-founder of AimBrain, which specialises in biometric facial authentication, explains: “A randomised-challenge lip-sync liveness test checks both the video and audio channels, requiring bad actors to generate an artefact-free video and audio response to the challenge in real time, which is not currently feasible.”

Should we be worried about deep fakes in the future?

With accessible tools for creating deep fakes now available to anyone, it’s understandable that there is concern about this technology being used for nefarious purposes. But that is true of all technological innovation; there will always be some people who find ways to use it to the detriment of others. Nonetheless, deep fake technology comes out of the same advancements as other machine learning tools that improve our lives in immeasurable ways, including the detection of malware and malicious actors.

As it stands, there have been a small number of reported cases of fraud involving fake voice techniques used to convince company employees to wire money to fraudulent accounts. However, there have been no known attempts to access accounts or defeat biometric identity verification solutions using deep fake videos. Currently, the technology powering deep fake videos is not sophisticated enough to defeat existing biometric recognition technology, but as computing power grows and algorithms get faster, deep fakes may become an increasing threat to identifying customers remotely.

That is why it is important that regulated firms invest in robust client onboarding processes that are agile enough to keep up with the changing business environment and with new identity fraud threats.