


Deepfakes, created using AI, are increasingly used in cyberattacks. These realistic forgeries, which can mimic voices and faces, are fueling a surge in fraud, with reported incidents rising over 2000% in 2024. Cybercriminals employ tools like VoiceMimic and VideoMorph to create convincing deepfakes for scams such as Business Email Compromise (BEC) attacks, exemplified by the $25 million theft from Arup. The report emphasizes the need for stronger authentication methods, employee training, and deepfake detection technologies to combat this growing threat. Ultimately, a multi-faceted approach is crucial for organizations to protect themselves from these sophisticated attacks.
By Callie Guenther, Senior Manager - Cyber Threat Research at Critical Start