Cyberattacks against businesses now include attacks that involve Deepfake voices!
The evolution of modern technology has brought many innovations, and one in particular is shaking up the media landscape: Deepfakes. Deepfakes are videos, images, or audio recordings that have been manipulated with AI. In a Deepfake, an individual can be presented as saying or doing something that never happened.
Deepfake content is convincing, and the ongoing development of Deepfake technology has made it increasingly difficult to discern real content from fake. While the technology is still relatively new, it already plays a role in emerging fraud and cybercrime trends. This has become a growing concern among consumers and organizations alike, as criminals exploit Deepfakes to carry out social engineering attacks, spread misinformation, and run fraud scams.
According to Pfefferkorn, evidence tampering is one of the major threats Deepfakes pose to the judicial system. Evidence in a court of law can be manipulated with Deepfake technology to sway a case one way or the other. Further issues may arise during cross-examination when an offering party testifies affirmatively about the details of a Deepfake video while the opposing party denies its contents. Deepfakes could also burden the courts with additional caseload, along with the money and time required to verify and authenticate evidence before it can be admitted. For example, in a UK child custody case, a mother presented a Deepfake audio file as evidence. She had used Deepfake technology and online tutorials to create a plausible recording that sounded like the father threatening her, to support her claim that he was too violent to be allowed access to their children. After the file was forensically examined, however, it was proven to be fake and dismissed by the court.
As a cyberattack investigator, Nick Giacopuzzi now responds to a growing number of attacks against businesses that involve Deepfaked voices, work that has ultimately left him convinced that in today's world, "we need to question everything."
In particular, Giacopuzzi has investigated multiple incidents in which an attacker deployed AI-fabricated audio purporting to be the voice of an executive or a manager at the target company. You can guess how it went: The fake boss asked an employee to urgently transfer funds. And in some cases, it worked, he said.
"It's your boss's voice. It sounds like the person you talk to every day," said Giacopuzzi, who is a senior consultant for cyber investigations, intel, and response at StoneTurn. "Sometimes it can be very successful."
It’s a new spin on the impersonation tactics that have long been used in social engineering and phishing attacks, but most people aren’t trained to disbelieve their ears.