Cybersecurity Newsletter
AUGUST 2021

Deepfake

In their search for new forms of cyberattack, attackers are making increasing use of artificial intelligence. One area of particular concern is deepfakes, which use AI to fabricate realistic audio and video of real people and can deceive, or help deceive, unwary users.

What is a deepfake?

The term “deepfake” comes from the underlying technology, “deep learning”, a branch of machine learning and artificial intelligence used to create synthetic voice or video of a human being. Deep learning algorithms learn patterns from large amounts of data and then use them to manipulate images, video, or audio, creating fake content with a realistic appearance.
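To make this more concrete, below is a minimal, purely illustrative sketch (in Python, using PyTorch) of the autoencoder setup commonly described for face-swap deepfakes: a shared encoder learns a common representation of two people's faces, each person gets their own decoder, and the swap consists of decoding person A's face with person B's decoder. The layer sizes and names are assumptions for illustration, not a description of any specific tool, and the training step is omitted entirely.

# Illustrative face-swap autoencoder sketch (assumed architecture, not a real tool).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 3x64x64 face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder: reconstructs a face crop from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's faces
decoder_b = Decoder()  # would be trained only on person B's faces

face_a = torch.rand(1, 3, 64, 64)      # a face crop of person A
swapped = decoder_b(encoder(face_a))   # decoded as person B: the "swap"
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])

In practice the networks are far deeper and are trained on thousands of aligned face crops of each person, but the core trick is the same: one shared representation, decoded through the “wrong” decoder.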

How is this used and what are its dangers?

Although the ability to automatically swap faces and create manipulated videos that look genuine and trustworthy has some genuinely interesting applications (such as film and gaming), this is clearly a technology that raises real problems.

The use of deepfakes to manipulate footage of politicians and celebrities produced the first and most controversial experiences with this technology. In 2018, for instance, a Belgian political party circulated a video in which Trump appeared to give a speech asking Belgium to withdraw from the Paris Agreement. Trump never gave that speech: it was a deepfake. This is one of the dangers of this technology: spreading false content as if it were authentic.

In a corporate setting, deepfakes amplify social engineering when used to manipulate people into granting unauthorized access to systems, infrastructure, or information. Attackers can, for example, combine e-mail phishing with fake audio messages; by using multiple vectors, they increase their chances of deceiving users.


How to detect a deepfake?

As deepfakes become more common, society is becoming more aware of these techniques and how they are used, and better able to detect possible fraud attempts.


To protect yourself, we recommend paying attention to some of these tell-tale indicators of a deepfake:


Blinking: Current deepfakes struggle to animate faces realistically, which often results in videos where the eyes never blink, blink too much, or blink in an unnatural way. However, after researchers at the University at Albany published a study on detecting this anomaly, new deepfakes appeared that no longer show this flaw. A simple sketch of how blink frequency can be checked automatically appears after this list.


Face: Look for skin or hair artifacts, or faces that appear blurrier than their surroundings. Is the skin too smooth or too wrinkled? Do facial expressions seem uncanny? Does the facial hair (beard, moustache, eyebrows) look real? Sometimes the changes are subtle and barely noticeable.


Lighting: Does the lighting seem off? Are the shadows in the right place? Deepfake algorithms usually keep the lighting of the clips used as models, which often does not match the lighting of the target video.


Audio: The audio may not match the person, especially if the video was forged and the original audio was not manipulated as carefully.


However, the most important rule is: if the content seems odd, suspicious, or in any way surprising or dubious, check with the sender to confirm that they actually sent it.
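As an illustration of the blinking indicator mentioned above, the sketch below (in Python, using NumPy) shows one common way such a check can be automated: compute the eye aspect ratio (EAR) from six eye landmarks in every frame and count how often it drops, i.e. how often the eyes actually close. The landmark coordinates are assumed to come from a face-landmark library such as dlib or MediaPipe (not shown here), and the threshold values are illustrative rather than tuned.

# Illustrative blink-frequency check using the eye aspect ratio (EAR).
# Landmarks are assumed to come from a face-landmark detector (e.g. dlib or MediaPipe);
# here we work with already-extracted (x, y) coordinates.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it falls sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame EAR values.
    A blink is a run of at least `min_closed_frames` frames below the threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

# Toy example: a 30-second clip at 30 fps in which the EAR never drops,
# i.e. the eyes never close.
ears = [0.3] * 900
print(count_blinks(ears))   # 0 blinks in 30 seconds -> worth a closer look

People typically blink roughly 15 to 20 times per minute, so a longer clip with no blinks at all, or with a clearly abnormal blink rate, is a reason to look more closely; as noted above, though, newer deepfakes may pass this particular test.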

Detecting deepfakes is a challenge. In amateur videos, the flaws can still be spotted with the naked eye, but AI technologies are evolving so quickly that deepfakes are becoming harder and harder to detect. Some believe we will soon have to rely on forensic specialists to determine whether such content is genuine.



