What are Deepfakes, Their Threats, and How to Avoid Them?

By Shanika W., Cybersecurity Analyst · 11 February 2024

Fact-checked by Miklos Zoltan

In this guide, we will talk about what deepfakes are, how they can seriously harm individuals and institutions, and how you can protect yourself and avoid them.

What are deepfakes?

Deepfakes are fabricated media created by manipulating existing images, video, or audio with deep learning techniques. The main technologies behind them are autoencoders and generative adversarial networks (GANs). Face swapping, face re-enactment, audio deepfakes, lip-syncing, and puppet-master videos are among the most common types circulating on the internet.

In this complete guide, you will learn about:

  • What Are Deepfakes?
  • How are Deepfakes created?
  • Different types of Deepfakes
  • What are their threats?
  • How can you identify a Deepfake?
  • How to keep yourself protected from Deepfakes?

Since deepfakes first emerged, many individuals, companies, and governments have become targets of fake media, suffering financial losses and reputational damage. And because anyone can create a deepfake with freely available software, anyone can become a target.

Knowing how to spot a deepfake at a glance is therefore a valuable skill. And because the threat keeps spreading into new areas, researchers continue to work on more accurate deepfake detection methods.

Summary: This article delves into the world of deepfakes and the significant risks they pose to individuals and organizations alike.

Deepfakes are synthetic media clips fabricated by altering genuine media through advanced deep learning techniques, such as autoencoders and Generative Adversarial Networks (GANs).

It outlines the various forms of deepfakes and their possible dangers, including fraudulent schemes, extortion, manipulation of biometric systems, explicit content creation, political manipulation, and social engineering tactics.

Furthermore, the article offers guidance on recognizing deepfakes and strategies for defending against them, including educating oneself and others, consulting trustworthy news outlets, and employing technological solutions designed to identify deepfakes.

What Are Deepfakes?

Deepfakes are false media content created by manipulating a person's likeness in an existing image or video using powerful machine learning methods.

For example, in 2018 a fake video showing Barack Obama scolding Donald Trump went viral. Its main purpose was to demonstrate the consequences of deepfakes and how powerful they have become. In 2019, Mark Zuckerberg appeared in a fake video about how Facebook controls the data of billions of users.

The term "deepfake" combines "deep learning" and "fake", because the technique leverages deep learning architectures, a branch of machine learning and artificial intelligence.

Deepfakes are produced by training autoencoders or GANs to generate highly deceptive media. The truly dangerous part is that anyone can create one and convince people that something fake is real.

Who are the targets of Deepfakes?

Deepfakes first appeared in 2017 in a pornographic video featuring a celebrity, and they quickly spread into many other areas such as politics, finance, and news.

Many celebrities have found themselves appearing in pornographic videos online, and political leaders have been shown in the news saying things they never actually said.

Deepfake generation techniques usually require a large number of images and videos of the target, so high-profile people become common targets simply because so much footage of them is available online. As a result, deepfakes can seriously damage the reputation of any public figure.

Today, popular deepfake applications such as FakeApp, DeepFaceLab, FaceSwap, and ZAO are readily available, so anyone can make a deepfake.

Types of Deepfakes

Deepfakes mainly fall into the following categories:

  • Face swapping – the face in one image or video is replaced with another person's face
  • Face re-enactment – manipulating a person's facial features and expressions
  • Audio deepfakes – synthetic audio that imitates a particular person's voice
  • Lip-syncing deepfakes – videos in which the mouth movements are matched to a chosen audio track
  • Puppet-master – videos in which a target person (the puppet) is animated using the movements of another person (the master) performing in front of a camera

How Deepfakes are Made

Autoencoders and GANs are the two deep learning technologies behind the deepfake applications developed so far.

Autoencoders

Face-swapping deepfakes are mainly built with autoencoders. To make a deepfake video of someone, you first need to train an autoencoder, which has two parts: an encoder and a decoder.

The technique usually uses two encoder-decoder pairs, one for each of the two people whose faces will be swapped, and in the common setup the encoder is shared between them. You feed many images of both people through the encoder, and to make the results realistic, those images should include face shots from different angles and under different lighting conditions.

During training, the encoder extracts the latent features of each image, compressing it into a compact latent representation. The decoder then reconstructs the original image from that latent representation.

For example, suppose decoder A is trained to reconstruct person 1's face from the shared latent representation, and decoder B is trained in the same way on person 2's face.

When training is complete, you swap the decoders: an image of person 1 is passed through the encoder and then through decoder B, so the output shows person 2's face with person 1's pose and expression, which is what ultimately produces the face swap. ZAO and FakeApp are popular swap-based applications that are very effective at generating realistic images.
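
A rough sketch of this setup in PyTorch is shown below. It assumes the shared-encoder, two-decoder arrangement described above; the tiny fully connected networks, the 64x64 image size, and the random tensors standing in for face crops are illustrative placeholders, not the actual architecture used by FakeApp, DeepFaceLab, or ZAO.

```python
import torch
import torch.nn as nn

IMG = 3 * 64 * 64  # flattened 64x64 RGB face crop

class Encoder(nn.Module):
    """Compresses a face crop into a small latent vector of its key features."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(IMG, 1024), nn.ReLU(),
                                 nn.Linear(1024, latent_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, IMG), nn.Sigmoid())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

# One shared encoder, one decoder per identity.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batches; real training uses thousands of aligned face crops per person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(10):  # real training runs for many more iterations
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The swap: encode person A's face, then decode it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))  # person B's face with person A's pose and expression
```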

Generative Adversarial Networks (GANs)

GANs also consist of two algorithms, a generator and a discriminator, that work against each other. First, the generator creates new images from a latent representation of the source material.

The discriminator then tries to determine whether each image is real or generated, exposing its defects. Under this pressure, the generator learns to create images that look as real as possible.
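
To make this adversarial loop concrete, here is a minimal GAN training sketch in PyTorch. It is an illustration only: the fully connected networks, batch sizes, and the random tensor standing in for real face images are assumptions, not the design of any specific deepfake tool.

```python
import torch
import torch.nn as nn

latent_dim = 100

# Generator: maps random latent noise to a flattened 64x64 RGB image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64), nn.Tanh(),
)

# Discriminator: scores an image as real (close to 1) or generated (close to 0).
discriminator = nn.Sequential(
    nn.Linear(3 * 64 * 64, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Placeholder batch of "real" images, scaled to [-1, 1] to match the generator's Tanh output.
real_images = torch.rand(8, 3 * 64 * 64) * 2 - 1
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

for _ in range(10):  # real training runs for many more iterations
    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(8, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), ones) + bce(discriminator(fake_images), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    fake_images = generator(torch.randn(8, latent_dim))
    g_loss = bce(discriminator(fake_images), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```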

A lot of deepfake services can be found on the dark web.

How Deepfakes Can Become Threats

Deepfakes can become serious threats to individuals, businesses, and public institutions.

The applications of Deepfakes

There are areas where people use deepfakes productively. For example, filmmakers and 3D video creators have reduced production time using deepfake techniques, and deepfakes can also be used purely for entertainment.

But the ultimate motive behind the majority of deepfakes is to manipulate an audience into believing that someone did or said something that never happened.

The creator falsifies media and spreads the false information to many users, with malicious intents such as:

  • Scams and blackmail
  • Biometric manipulation
  • Pornography
  • Gaining political advantage
  • Social engineering

Across all these examples, deepfakes threaten individuals' reputations, sensitive data such as financial information, cybersecurity, political elections, and more.

This misuse can play out in scams against individuals and companies, including on social media.

Scams

In 2019, The Wall Street Journal reported that the CEO of a U.K.-based energy company was tricked by a fraudster over the phone into transferring €220,000 to a Hungarian supplier.

The fraudster reportedly used audio deepfake technology to mimic the voice of the CEO of the company's parent firm and order the payment.

Audio deepfakes are the most popular type of deepfake used in scams, making victims believe they are talking to a trusted person. In most cases, the deepfake audio impersonates a senior figure in the company, such as the CEO or CTO.

Threats for businesses

Remote working has risen sharply since the COVID-19 pandemic, so more business is now conducted over video conferencing and phone calls, which makes companies more vulnerable to such scams.

Deepfakes put businesses at high risk of financial loss and reputational damage. Companies can unknowingly help scammers commit fraud, which can even drag them into unwanted lawsuits.

Biometric manipulation

Many organizations now use biometric technology as a secure access method. Deepfakes have the potential to undermine this technology by spoofing the data it relies on.

Because biometrics guard access to restricted systems and places, fooling a face scanner with a deepfake can give an attacker unauthorized access to those restricted areas.

Social media manipulation

Deepfakes are popular on social media platforms, where they are designed to trigger reactions and maximize page reach. Imagine a Facebook page posting deepfakes of political figures or celebrities, provoking outraged comments and creating havoc.

Besides, can you guarantee that any given profile belongs to a real person? Probably not. The profile picture you see on a Facebook account could itself be a deepfake, and if so, whatever that account shares probably isn't real either.

Threats for politics

Another area threatened by deepfakes is political manipulation. Freely available deepfake creation software makes it easy to produce fake content and distribute it to a wide audience.

As a result, anyone can use deepfakes to feed false information to the public for political advantage, especially around election time.

Gaining political advantages

One prominent example is the circulation of a fake video of an American politician, Nancy Pelosi, on social media. She appeared to be speaking as if she were intoxicated.

Former American President Donald J. Trump shared the video on his social media accounts, hoping to damage the public image of Nancy Pelosi, his political opponent. The video went on to reach more than 2 million views and shares on social media.

Damaging bilateral relationships among countries

The threats from deepfakes are not limited to politics within a single country. They can also cross national boundaries and damage relationships between countries.

For example, in 2020 the Australian Prime Minister, Scott Morrison, demanded an apology from China over a fabricated image shared on Twitter that showed an Australian soldier holding a knife to the throat of an Afghan child.

The image provoked anger online and temporarily strained relations between the Australian and Chinese governments. Incidents like this have led many authorities to call for tighter controls on politically motivated deepfakes on social media platforms.

How to Avoid Deepfakes

Several methods can help identify deepfakes.

How can you identify a Deepfake?

If you see someone in a video doing or saying something unusual, check it yourself for the following characteristics. Deepfake videos are still at a stage where a careful viewer can spot the difference by looking for these signs:

  • Facial Expressions that do not look natural.
  • Lighting that changes too often.
  • No eye movement, no blinking or blinking unnaturally.
  • Changes in skin tone.
  • Lack of emotions in the face.
  • Unnatural positioning of facial features.
  • Unnatural body posture and movement.
  • Image Blurring.
  • Image misalignment.
  • Poor lip-syncing.
  • Hash or digital-fingerprint changes. Video creators can embed hashes or digital fingerprints at defined points in a video to prove authenticity; if they no longer match, the video may have been manipulated (a minimal hash-check sketch follows this list).
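
As an illustration of the fingerprint idea in the last point, the short Python sketch below computes and compares a whole-file SHA-256 digest. The file name clip.mp4 and the published digest are hypothetical placeholders; real provenance schemes typically embed signed fingerprints or watermarks inside the video rather than hashing the whole file.

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: compare a downloaded copy against a digest published by the creator.
# published = "3a7bd3e2360a3d..."           # placeholder value from a trusted source
# if fingerprint("clip.mp4") == published:  # "clip.mp4" is a hypothetical local file
#     print("matches the published original")
# else:
#     print("file differs from the published original")
```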

What can you do to avoid Deepfakes?

  • If you come across a controversial video being shared on social media, always check its source and confirm that it comes from a reputable person before believing or sharing it. The same goes for any suspicious audio call, even one that appears to come from someone senior to you. This cannot guarantee you will never be caught out, but it will help you avoid many scams.
  • For businesses, integrating strong verification checks into any process that touches financial data goes a long way toward stopping deepfake scams.
  • Educate yourself, others, and your company's employees on how deepfakes work, the threats they pose, and how to identify them.
  • Rely on reputable news sources.

Tools and Technologies that help detect Deepfakes

Summary

Deepfakes have become one of the biggest threats facing people worldwide. As social media content keeps growing, deepfake creators will keep producing higher-quality fakes that are harder to detect.

Thus, deepfake detection technologies must develop continually, and governments must regulate deepfake usage on social media. To avoid falling into such traps, make sure to follow the tips listed in this article.

Frequently Asked Questions


How many pictures do you need for a deepfake?

The accuracy and quality of a deepfake depend heavily on the number of target images used to train the deep learning model, and those images need to cover a wide range of facial features, angles, and expressions. Roughly 300–2,000 images of a face are typically needed to recreate it convincingly.


Are deepfakes legal?

Several US laws regulate and monitor the use of deepfakes. California, for example, has experimented with banning deepfakes and has passed a law preventing their use to influence elections.


When was the first deepfake created?

The first deepfake in the modern sense was created in 2017 by a Reddit user who called himself "deepfakes", although researchers had been developing the underlying media-manipulation techniques since the 1990s.


Are deepfakes easy to make?

Yes. Deepfake videos are easy enough to make that almost anyone can create one. Several deepfake creation tools are available, such as FakeApp, DeepFaceLab, and FaceSwap, along with plenty of tutorials that walk through the process in a few simple steps.

