AI scammers are using deepfake technology to create fake job applicants and impersonate people for various frauds. Companies are experiencing increased AI deepfake fraud, with scammers targeting businesses, romance seekers, and financial institutions. Detection tools are being developed to combat this growing threat.
Artificial intelligence is revolutionizing the world of scams, making it easier than ever for fraudsters to deceive unsuspecting victims. The technology is now being used to create hyper-realistic video and audio to impersonate people, targeting everything from job applications to romance seekers and financial institutions.
A 2024 study revealed that half of the businesses surveyed had experienced AI deepfake fraud, highlighting the growing prevalence of this issue. The scope of these scams is vast and concerning, as demonstrated by a recent case where AI-generated applicants attempted to secure a job at Pindrop Security, a company specializing in detecting AI fraud.
Vijay Balasubramaniyan, CEO of Pindrop Security, reports a staggering 900% increase in AI scams over just two years. The irony of fake applicants targeting a company that detects AI scammers underscores the brazen nature of these fraudsters. As Balasubramaniyan explains, “These AI scammers are just applying to a wide variety of companies. Every fraudster is playing a numbers game.”
The threat extends beyond job applications. The Justice Department has accused North Korea of attempting to infiltrate more than 300 US corporations using fake job applicants, aiming to extort companies and gain access to private data. Balasubramaniyan emphasizes that AI tools are being used for a range of scams, including romance scams, elder abuse, account takeovers, and hiring fraud, fundamentally eroding trust across multiple sectors.
Perhaps most alarmingly, Pindrop reports that the biggest threat comes from AI bots impersonating customers to steal money and personal data from banks, insurance companies, and healthcare providers. The ease with which these deepfakes can be created is demonstrated by Pindrop’s team, who made a convincing fake of a CBS reporter in less than 10 minutes.
The sophistication of these AI-generated personas is increasing rapidly. In a demonstration, Pindrop’s Vice President D. Russell posed as the CBS reporter, applying for a job. While their technology immediately flagged it as synthetic, the casual observer might easily be fooled, especially in situations with poor video quality or audio-only interactions.
As these scams become more prevalent and sophisticated, the ability to distinguish real humans from AI-generated ones is becoming critical. Companies like Pindrop, Microsoft, Intel, and Sensity are part of a growing industry developing tools to detect AI fakes. However, the arms race between scammers and security experts continues to escalate.
For individuals concerned about falling victim to these scams, there are some basic precautions to take. On a video call, asking the person to wave their arm in front of the camera can help expose a fake, as the AI may struggle to render that motion accurately. AI-generated voices on phone calls, by contrast, can be nearly indistinguishable from real ones, so if you suspect foul play, it’s best to hang up and call the institution directly.
As AI technology continues to advance, both in its capacity to create convincing fakes and in its ability to detect them, vigilance and awareness remain our best defenses against these increasingly sophisticated scams. The challenge for security experts is to stay one step ahead of the fraudsters, developing more robust detection methods as the AI-generated content becomes more realistic.
