Current State of Fraud: The Rising Threat of Video Injection Attacks

Technological advancements and the bold presence of AI are having a ripple effect across every industry. But when it comes to preventing identity theft and fraud, those same tools cut both ways. Staying one step ahead of fraudsters is a serious challenge for regulatory bodies and Trust Service Providers, who report more advanced and aggressive attempts year after year.

Drawing on insights shared during an ENISA workshop on remote video identification, our CPO Tomas Zuoza notes that the rising volume and diversity of fraudulent cases are a major concern. Indeed, iProov, a world-leading identity verification solution provider, reported a 295% increase in video injection attacks in the second half of 2022 compared to the first six months of that year.

Technology enables high-level fraud with less effort

Using fake documents for authentication or presenting the document of someone who resembles the fraudster are some of the oldest tricks in the book. They make up the vast majority of fraudulent cases in the virtual realm, too. But according to Tomas, the biggest challenge is the new types and sub-types of attacks that slip past security algorithms. “I’d highlight two types of AI-powered injection attacks here. The first targets liveness checks: fraudsters project someone else’s face onto their own. The second type is more common and targets documents, usually by altering their validity period or projecting security features that aren’t actually there,” he says.

When it comes to the use of technology in fraud, the present moment feels like a turning point for fraud detection and prevention altogether. Whereas only a few years ago a video injection attack may have required a team of skilled professionals operating across multiple countries, today the same threat can come from a regular desktop setup in someone’s bedroom. Although the preparation may still involve some mechanical tampering with a mobile device (installing an additional wire to alter the live camera output, for example), the main “actor” in modern video injection attacks is AI.

The science behind fraud prevention

Although the types of fraud and the technical details behind each of them are diverse, it’s possible to group them into “categories” based on common characteristics. That’s where technology works in favor of Trust Service Providers. “Once we define a certain strategy or approach that fraudsters use, we can establish an algorithm to detect those attempts. In such cases, automated checks are extremely effective and can identify even the most subtle traces of video injection that would be invisible to the human eye,” Tomas says.

But the tactics that fraudsters use are ever-changing. That puts constant pressure on providers to update their algorithms and to keep a manual element in the onboarding process. “Algorithms detect patterns, and humans do a much better job at detecting unusual, new attempts to commit fraud. Besides, having a manual element in our identity verification process allows us to collect data and identify new, emerging patterns,” notes our CPO. Indeed, this hybrid approach is an industry standard for Trust Service Providers all over the world, because algorithms alone are insufficient.
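The hybrid approach described above can be sketched as a simple routing rule: automated checks handle the clear-cut cases, and anything in the grey zone goes to a human reviewer, whose decisions in turn feed new patterns back into the automated layer. The sketch below is purely illustrative; the check names, scores, and thresholds are hypothetical and do not describe any provider's actual system.

```python
# A minimal sketch of a hybrid verification pipeline: automated pattern
# checks run first, and ambiguous sessions are routed to manual review.
# All thresholds and check names here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    name: str
    score: float  # 0.0 = certain fraud, 1.0 = certain genuine

def run_automated_checks(session: dict,
                         checks: list[Callable[[dict], CheckResult]]) -> list[CheckResult]:
    """Run every automated pattern check against one verification session."""
    return [check(session) for check in checks]

def route(results: list[CheckResult],
          reject_below: float = 0.3, accept_above: float = 0.9) -> str:
    """Automate the clear pass/fail cases; send the grey zone to a human."""
    worst = min(r.score for r in results)
    if worst < reject_below:
        return "reject"          # a known fraud pattern matched strongly
    if worst > accept_above:
        return "accept"          # all checks are confident
    return "manual_review"       # unusual case: humans spot novel attacks best

# Two toy checks standing in for real detectors (e.g. liveness, document integrity).
def liveness_check(session: dict) -> CheckResult:
    return CheckResult("liveness", session.get("liveness_score", 0.5))

def document_check(session: dict) -> CheckResult:
    return CheckResult("document", session.get("document_score", 0.5))

session = {"liveness_score": 0.95, "document_score": 0.6}
outcome = route(run_automated_checks(session, [liveness_check, document_check]))
print(outcome)  # the middling document score sends this session to manual review
```

The design choice mirrors the quote: pattern-matching thresholds decide only when they are confident, so novel attacks that don't fit a known pattern end up in front of a person rather than being silently accepted.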

As the power and accessibility of modern technology continue to grow, it’s likely that entirely new types of fraud will emerge in the near future. But Trust Service Providers and regulatory bodies have been facing such growth for decades, and recording changes in fraud patterns as they take place has been a key factor in this cat-and-mouse game. “I don’t see the industry moving to a single data proofing method anytime soon,” Tomas agrees.
