AI vs. Face-to-Face Verification: Debunking Myths and Identifying the Most Secure Method

After a long period of uncertainty and technological development, AI is quickly making its way into customer journeys and internal work processes across the board. The newfound accuracy and efficiency of algorithms translate into thousands of use cases, ranging from content generation to autonomous driving and fraud prevention. In the latter area, it’s helping trust service providers detect fraudulent onboarding attempts – some of which are invisible to the human eye. But according to our Chief Product Officer Tomas Zuoza, for now, the benefits and challenges of integrating AI into ID verification come in equal parts.


How does AI-based ID verification work?

When it comes to remote onboarding, service providers have three options: set up a fully manual check (via a conference call), run an AI-based process with a manual element to it, or rely entirely on AI. Each model comes with its own set of pros and cons, but at the moment the majority of service providers around the world favor the hybrid approach. Essentially, that means putting AI at the forefront of data verification and running a manual check as the final step in the process. According to Tomas, the human touch offsets the biggest technical flaw of AI: its inability to adapt to new types and sub-types of fraud tactics. “As long as we can establish the traits and nature of a certain attack, we can train AI to recognize it and block such attempts in the future. But fraudsters are constantly coming up with new ways to challenge remote ID verification. Having a manual element in the onboarding process is the best way to detect and block previously unseen attempts, as well as collect data for retraining the algorithm,” Tomas says.
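
To make the hybrid model concrete, here is a minimal sketch of how such a pipeline could be wired together. It is purely illustrative: the function names, thresholds, and the scoring and review callbacks are hypothetical stand-ins, not any provider’s actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Application:
    """Data collected during onboarding (document images, selfie, metadata)."""
    applicant_id: str
    features: dict

@dataclass
class Decision:
    applicant_id: str
    outcome: str          # "pass", "fail" or "reapply"
    decided_by: str       # "ai" or "manual"

# Hypothetical thresholds: high-confidence cases are decided automatically,
# everything in between is routed to a human reviewer.
AUTO_PASS = 0.95
AUTO_FAIL = 0.05

retraining_queue: List[Application] = []  # manually reviewed cases feed model retraining

def hybrid_verify(app: Application,
                  ai_score: Callable[[Application], float],
                  manual_review: Callable[[Application], str]) -> Decision:
    score = ai_score(app)                  # AI runs first on the collected data
    if score >= AUTO_PASS:
        return Decision(app.applicant_id, "pass", "ai")
    if score <= AUTO_FAIL:
        return Decision(app.applicant_id, "fail", "ai")
    # Borderline or previously unseen patterns: fall back to the manual check
    outcome = manual_review(app)           # returns "pass", "fail" or "reapply"
    retraining_queue.append(app)           # collect data for retraining the algorithm
    return Decision(app.applicant_id, outcome, "manual")
```

The key design point, as Tomas describes it, is the feedback loop: whatever the manual reviewer catches becomes training data for the next iteration of the model.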

It’s also the most natural way to keep AI algorithms up to date, but according to Tomas, the influence of AI in the onboarding process goes much deeper than that. “We know what happens during the onboarding process – we collect information, AI processes it, then we have the manual check, and then comes the final decision: pass or fail/reapply. That said, we can’t really explain the “how” part when it comes to AI-based ID verification,” he notes. Indeed, to humans, AI is like a black box. But there might be another way to deconstruct it, and that’s where technological progress takes a really interesting turn.


Understanding AI’s “thought process” – what’s beneath the surface?

As a black-box solution, AI makes up only a part of the hybrid onboarding process, leaving some space for adjustment and decision-making during the manual check. In a fully AI-based process, however, the need to understand its reasoning becomes unavoidable. “We’re no longer treating computers as machines. As technology progresses, the main goal in terms of user experience is making computer-generated responses feel and sound as human as possible. AI does that too, but the “problem” is that it “thinks” like humans, too. If I arrive at a certain conclusion and someone else arrives at a different one from the same starting point, who’s to say I’m right and they’re wrong? The more abstract the input, the more “correct” answers we’ll have,” Tomas says. The more subjective the answer, the more important it is to understand the thought process behind it. But with current technology, one AI model can only serve one designated function. In other words, explaining the thought process falls beyond the scope of an AI programmed for ID verification. The solution is to build an AI model that can understand and explain the decision-making of other AI models in human terms. According to Tomas, this would be a big step toward an even wider application of AI – especially in heavily regulated industries.
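
The article doesn’t specify how such an “explainer” would be built, but one common post-hoc technique is a global surrogate: train a simple, human-readable model (for example, a shallow decision tree) to mimic the black-box verifier’s outputs, then read its rules. The sketch below assumes a hypothetical `black_box_verify` function and synthetic feature data; it illustrates the general technique, not the product described here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical black-box verifier: in production this would be the actual AI model;
# here it is a stand-in that flags blurry documents, mismatched faces or bad MRZ checksums.
def black_box_verify(X: np.ndarray) -> np.ndarray:
    doc_sharpness, face_similarity, mrz_checksum_ok = X[:, 0], X[:, 1], X[:, 2]
    return ((doc_sharpness > 0.4) & (face_similarity > 0.7) & (mrz_checksum_ok > 0.5)).astype(int)

# Synthetic onboarding features, for illustration only.
rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = black_box_verify(X)                     # the decisions we want to explain

# Global surrogate: a shallow tree that approximates the black box in readable rules.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(surrogate,
                  feature_names=["doc_sharpness", "face_similarity", "mrz_checksum_ok"]))
```

A surrogate only approximates the original model, so its fidelity would have to be measured before its rules are treated as an explanation of the real system.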


Regulatory hurdles

This lack of understanding is, in fact, one of the major obstacles on the road to shaping a regulatory environment around AI application and development. Even though AI as a concept isn’t new, the scale and accuracy with which it’s applied today are unprecedented. From AI-generated backgrounds on conference calls to AI-generated audio and masks that can make anyone sound and look like someone they’re not, there are dozens of concerning use cases that spark heated debates and calls for intervention from regulatory bodies. “If we let AI run free, we won’t be able to trust anything we see on the internet. Tags are one of the proposed ways to mark AI-generated content, and they could be effective, but first we need a regulatory base governing their application and the treatment of AI-generated data. With so much going on, it will take time,” Tomas notes.

Another hurdle is regulatory misalignment on an international scale. According to Tomas, NFC chips, which are integrated into ID documents EU-wide, could serve as a reliable basis for verification. Even so, in some countries (France, for example) they’re off-limits to service providers, pushing them to resort to other, less reliable methods.
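
To illustrate why chip data is considered a strong anchor: access to an ID document’s NFC chip is itself cryptographically gated. Under ICAO Doc 9303 Basic Access Control, the reader derives its access keys from the machine-readable zone printed on the document, so the chip only responds to a party that has actually seen the document. The sketch below shows that key derivation in simplified form (the 3DES parity adjustment and the full PACE/session handling used in practice are omitted), with illustrative placeholder MRZ values.

```python
import hashlib

def bac_keys(doc_number_cd: str, birth_date_cd: str, expiry_date_cd: str):
    """Derive BAC encryption/MAC keys from MRZ fields (each including its check digit),
    following ICAO Doc 9303 in simplified form."""
    mrz_information = (doc_number_cd + birth_date_cd + expiry_date_cd).encode("ascii")
    k_seed = hashlib.sha1(mrz_information).digest()[:16]   # key seed from the printed MRZ

    def derive(counter: int) -> bytes:
        # Key derivation: SHA-1 over seed plus a 4-byte counter, truncated to 16 bytes
        return hashlib.sha1(k_seed + counter.to_bytes(4, "big")).digest()[:16]

    return derive(1), derive(2)   # (K_enc, K_mac)

# Placeholder document number, birth date and expiry date, each with its check digit.
k_enc, k_mac = bac_keys("L898902C<3", "6908061", "9406236")
print(k_enc.hex(), k_mac.hex())
```

Because the data groups stored on the chip are also digitally signed by the issuing state, a provider that can read the chip gets tamper-evident identity data rather than relying on photographs of the document alone.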


AI-based vs. face-to-face verification. Is there a clear winner?

“Even though NFC could bring service providers one step closer in terms of fraud prevention, the cat-and-mouse game won’t end anytime soon,” Tomas says. There’s also a common misconception that face-to-face identity verification is the most secure method there has ever been. But if we’re being honest, when was the last time someone checked your ID with a magnifying glass, a UV light, and a torch? And even if we do all that, what happens when we encounter identical twins?

“If we were to go back to face-to-face identity verification, we wouldn’t have to worry about the validity of liveness checks or selfies. But the world is going in the opposite direction, and the importance of having future-proof methods for remote ID verification will only continue to grow. Given that, we should focus on making ID verification methods even more secure and shaping a regulatory climate that reflects the current situation, supporting service providers in their efforts to prevent identity theft and fraud. As the powerful and easily adaptable tool we already know it to be, AI will definitely play a big part in this progression,” Tomas notes.
