Deepfake Scam

“Seeing isn’t believing.”

Nowadays, a new deepfake video is made public almost every day, demonstrating that we truly cannot rely on what we see or hear.

Deepfakes use AI techniques (deep learning) to create convincingly realistic images, voice recordings and videos. They are based on existing video footage, audio material and images of a person. Popular victims of this technique are politicians, actors and other public figures, as there is plenty of footage of them available. This footage can be used, for example, to replace faces in videos with those of other people, with facial expressions imitated convincingly well. In the same way, the voice, speech melody and speech flow of a recording can be altered and imitated. It is thus possible to put words into somebody’s mouth using deepfakes; all you need is enough reference material of the alleged speaker. Deepfakes enable audio-visual fake news that is currently reaching a new level of credibility.

Deepfakes can also be used in fraud cases. At least one case of this kind is already known: in so-called CEO Fraud, fake messages (usually e-mails) that are made to look as if they were sent by the management are used to induce payments to alleged suppliers (in reality, the account of the fraudsters). In March of this year, an energy company fell victim to such a CEO Fraud. However, the request for payment did not arrive via e-mail but via phone. The manager of the British company received a phone call from his supervisor, the managing director of the German parent company. Both the speech melody and the accent matched those of the German CEO, but were most likely imitated using AI. Given the credibility of the call, the UK company dutifully transferred €220,000 to the alleged supplier.

The good news is that the detection of deepfakes is a highly researched topic as well. AI can not only create deepfake counterfeits but can also be used to expose them. An approach called Face Forensics, for example, can detect counterfeits, or more precisely altered regions in videos, using a large database of video pairs (fake and original videos).
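To make the idea of "detecting altered regions" concrete, here is a deliberately simplified sketch. It is not the Face Forensics method (which trains deep networks on large databases of original/fake video pairs); it merely illustrates the underlying intuition with a fixed threshold: compare a suspect frame against a reference frame block by block and flag blocks whose pixel statistics diverge. The function name, block size and threshold are illustrative choices, not part of any real tool.

```python
# Toy sketch of region-based manipulation detection (illustration only,
# NOT the real Face Forensics approach). We split a grayscale frame into
# small blocks and flag those whose mean absolute pixel difference from
# a reference frame exceeds a fixed threshold. Real detectors learn such
# decision boundaries from large sets of original/fake video pairs.

def flag_altered_blocks(original, suspect, block=4, threshold=30.0):
    """Return (block_row, block_col) indices of blocks whose mean
    absolute pixel difference exceeds the threshold."""
    h, w = len(original), len(original[0])
    flagged = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            diffs = [abs(original[y][x] - suspect[y][x])
                     for y in range(r, min(r + block, h))
                     for x in range(c, min(c + block, w))]
            if sum(diffs) / len(diffs) > threshold:
                flagged.append((r // block, c // block))
    return flagged

# Example: an 8x8 grayscale frame where one 4x4 region was "swapped".
orig = [[100] * 8 for _ in range(8)]
fake = [row[:] for row in orig]
for y in range(4):
    for x in range(4):
        fake[y][x] = 200  # manipulated top-left region
print(flag_altered_blocks(orig, fake))  # -> [(0, 0)]
```

In practice, of course, manipulated pixels are not this obvious; deep-learning detectors look for far subtler statistical traces of editing, which is why they need large training databases in the first place.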