DeepFake Detection using Machine Learning and Python

Table of Contents
  • So, what is DeepFake?
  • First Order Motion Model
  • Process of Creating Deepfake model
    • Extraction
    • Training
    • Creation
  • Building Deepfake Detection Model
    • Mounting and Cloning GitHub Repo
    • Importing Modules
    • Executing the model
    • Model creation
    • Deepfake detection
    • Enhancing output using absolute coordinates
  • What Are the Risks of Deepfake Technology?
  • Conclusion

Fake photos and videos are nothing new. For as long as there have been images and films, people have created forgeries meant to deceive or amuse, and the spread of the internet has only made them more common.

However, a new type of machine-produced fake may someday make it hard for us to distinguish reality from fiction.

These fakes differ from the simple picture manipulations generated by editing software like Photoshop or the cleverly manipulated films of the past.

Deepfakes are the most well-known example of “synthetic media”: images, sounds, and videos that appear to have been produced by conventional means but were actually created with sophisticated software.

Deepfakes have been around for a while, and while their best-known use so far has been putting the faces of famous people onto the bodies of actors in p*rnographic films, they can produce convincing footage of anybody doing anything, anywhere.

In this post, we will look at deepfakes, how they work, how you can generate them yourself, and much more.

So, what is DeepFake?

A deepfake, a combination of the phrases “deep learning” and “fake”, is a piece of synthetic media in which the likeness of one person replaces that of another in an existing photograph or video.

Deepfakes employ sophisticated machine learning and artificial intelligence techniques to modify and create visual and audio information that has a high potential for deception.

Deep learning methods such as autoencoders and generative adversarial networks (GANs) are the primary mechanisms behind deepfake production.

These models analyze a person’s facial expressions and movements and synthesize images of another person’s face exhibiting comparable expressions and movements.
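
To make this mechanism concrete, the snippet below is a minimal, illustrative PyTorch sketch of the classic deepfake setup: one shared encoder and two decoders, one per identity. The layer sizes, the 64x64 resolution, and the random input are placeholder choices for illustration only, not taken from any particular implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per person: the encoder learns identity-agnostic
# structure (pose, expression, lighting), while each decoder learns to render one identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)     # placeholder face crop of person A
swapped = decoder_b(encoder(face_a))  # decode A's expression with B's decoder: the face swap
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])

The swap works because the shared encoder is pushed to capture pose and expression rather than identity, leaving identity-specific appearance to each decoder.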

The use of deepfakes in celebrity p*rnographic videos, fake news, hoaxes, and financial fraud has drawn considerable attention. Both industry and the government have responded by trying to find them and limit their usage.

First Order Motion Model

When trying to create deepfakes in the past, the issue was that these approaches needed some sort of extra knowledge, or priors, to work.

For example, facial landmarks were required if we wished to track head movement, and pose estimation was necessary if we wished to map whole-body motion.

That changed at NeurIPS 2019, when a research team from the University of Trento presented their work, “First Order Motion Model for Image Animation.”

This approach requires no additional prior knowledge about the object being animated. In addition, once the model has been trained, it can be used for transfer learning and applied to any object in the same category.

Let’s look at how this method operates in a little more detail. The entire process is made up of two parts, motion extraction and generation, and it takes a driving video and a source image as inputs.

[Figure: overview of the First Order Motion Model pipeline, showing the motion extraction and generation stages]

The motion extractor uses an autoencoder-style network to detect sparse keypoints and the local affine transformations around them; together these form the first-order motion representation.

The dense motion network combines this representation with the driving video to produce a dense optical flow and an occlusion map. The generator then renders the target image from the source image and the dense motion network’s outputs.

Across the board, this work outperforms the previous state of the art. It also offers capabilities that other models simply do not have: it works on several image categories, so you can apply it to faces, full bodies, cartoons, and more.

This opens up many new possibilities. Another groundbreaking aspect of the approach is that it can produce high-quality deepfakes from just a single image of the target object, similar to what YOLO did for object recognition.

Process of Creating Deepfake model

Three processes are necessary for deepfake generation: extraction, training, and creation. The main points of each of these stages and how they relate to the overall process will be covered in this section.

Extraction

Deepfakes use deep neural networks to swap faces and need a lot of data (images) to work correctly and convincingly. The extraction stage is where all frames are extracted from the video clips, the faces in them are detected, and those faces are aligned to maximize performance.
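
A minimal sketch of this extraction stage, assuming OpenCV and a hypothetical clip named input.mp4; real pipelines typically use stronger detectors and landmark-based alignment rather than a plain crop-and-resize.

import os
import cv2

# Haar cascade face detector shipped with OpenCV (a simple stand-in for the
# CNN-based detectors and landmark alignment used by real deepfake tools).
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

os.makedirs("faces", exist_ok=True)
cap = cv2.VideoCapture("input.mp4")   # hypothetical input clip
frame_count, crop_count = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                         # end of video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Crude "alignment": crop the detected box and resize to a fixed size.
        face = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"faces/face_{crop_count:06d}.png", face)
        crop_count += 1
    frame_count += 1
cap.release()
print(f"Extracted {crop_count} face crops from {frame_count} frames")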

Training

In the training phase, the neural network learns to change one face into another. Depending on the size of the training set and the training hardware, training can take several hours or even days.

As with most other neural networks, training only has to be done once. After training, the model will be able to change a face from person A into person B.
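
As a rough sketch of this training loop, following the shared-encoder idea from earlier: each decoder is trained to reconstruct only its own identity from the shared code. The tiny fully connected networks and random tensors below are placeholders so the loop runs end to end; a real run uses convolutional networks, thousands of face crops, and far more iterations.

import torch
import torch.nn as nn

# Tiny stand-in networks so the loop runs; real models are convolutional.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(), nn.Unflatten(1, (3, 64, 64)))
decoder_b = nn.Sequential(nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(), nn.Unflatten(1, (3, 64, 64)))

faces_a = torch.rand(32, 3, 64, 64)   # placeholder batch of person A's crops
faces_b = torch.rand(32, 3, 64, 64)   # placeholder batch of person B's crops

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

for step in range(10):                # real training runs for hours or days, not 10 steps
    optimizer.zero_grad()
    # Each decoder reconstructs only its own person; the encoder is shared by both.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")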

Creation

Once the model has been trained, a deepfake can be produced. Frames are extracted from a video, and the faces in them are detected and aligned. The trained neural network then transforms the face in each frame.

The transformed face must be merged with the original frame as the last step.
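
A minimal sketch of that final merge, assuming OpenCV; the file names and face-box coordinates are hypothetical examples standing in for the original frame, the detector output, and the network’s generated face.

import cv2
import numpy as np

frame = cv2.imread("frame_000001.png")            # original frame (hypothetical file)
swapped_face = cv2.imread("swapped_000001.png")   # face produced by the trained network
x, y, w, h = 120, 80, 200, 200                    # face box from the detector (example values)

# Resize the generated face back to the detected box and blend it into the frame.
patch = cv2.resize(swapped_face, (w, h))
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)   # blend the whole patch
center = (x + w // 2, y + h // 2)
merged = cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("merged_000001.png", merged)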

Building Deepfake Detection Model

Mounting and Cloning GitHub Repo

Being able to use Google’s GPUs for free while working in Colab is a major advantage for deep learning. An additional advantage is the ability to mount Google Drive on the cloud virtual machine (VM).

This gives the user easy access to all of their files. The code needed to mount Google Drive on the cloud VM and clone the repository is shown in this section.

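A minimal Colab cell for this step is shown below. It assumes the publicly available first-order-model repository released by the paper’s authors; the repository URL and mount point are assumptions, not taken verbatim from the article.

# Mount Google Drive so media and checkpoints stored there are visible to the Colab VM.
from google.colab import drive
drive.mount('/content/gdrive')

# Clone the reference implementation of the First Order Motion Model (assumed repo URL)
# and move into it so its modules can be imported.
!git clone https://github.com/AliaksandrSiarohin/first-order-model
%cd first-order-model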

Importing Modules

Now, we will import all the necessary modules.

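The cell below lists the modules the first-order-model demo typically relies on; treat the exact list as an approximation rather than the definitive set.

import warnings
warnings.filterwarnings("ignore")

import imageio
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from skimage.transform import resize
from skimage import img_as_ubyte
from IPython.display import HTML

# Helper functions shipped with the cloned repository (demo.py).
from demo import load_checkpoints, make_animation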

Executing the model

We’ll use an example that combines a still photo of Putin (the source image) with a video of Obama (the driving video). The result is a video of Putin speaking and gesturing with the same facial expressions that Obama uses in the driving video.

Before displaying the model’s output, we load the media and declare the helper functions. We then load the checkpoints and construct the model. After creating the deepfake, we display two different styles of animation.

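Continuing from the imports above, the cell below loads the media and declares a small display helper. The file names and Drive paths are assumptions; point them at wherever you keep the Putin photo and the Obama clip.

# Load the source image and driving video (assumed paths on Google Drive).
source_image = imageio.imread('/content/gdrive/My Drive/putin.png')
driving_video = imageio.mimread('/content/gdrive/My Drive/obama.mp4', memtest=False)

# The pre-trained model expects 256x256 inputs.
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]

def display(source, driving, generated=None):
    """Show the source image, the driving video and, optionally, the generated video side by side."""
    fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))
    ims = []
    for i in range(len(driving)):
        cols = [source, driving[i]]
        if generated is not None:
            cols.append(generated[i])
        im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
        plt.axis('off')
        ims.append([im])
    ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)
    plt.close()
    return ani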

Putin is animated by Obama’s movements using relative keypoint displacement. It is striking how clearly Obama’s facial expressions and body language are transferred onto Putin in the resulting video.

There are a few small mistakes, particularly when Obama raises his eyebrows or blinks; these expressions are not replicated exactly in Putin’s frames.

Without knowing its deepfake origin, the Putin video would look fairly credible and authentic if it were shown on TV or social media.

Model creation

Now, we will be using the pre-trained checkpoints to create a complete model.

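A sketch of this step, assuming the load_checkpoints helper from demo.py in the cloned repository. The config path and the vox-cpk.pth.tar checkpoint (the face model the authors distribute) are assumptions; the checkpoint has to be downloaded to your Drive first.

# Build the generator and keypoint detector from the pre-trained checkpoint.
generator, kp_detector = load_checkpoints(
    config_path='config/vox-256.yaml',
    checkpoint_path='/content/gdrive/My Drive/vox-cpk.pth.tar',
)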

Deepfake detection

The cell below animates the objects using relative keypoint displacement. The next cell uses absolute coordinates instead; in that mode, the object proportions are taken from the driving video.

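A sketch of that cell, assuming the make_animation helper from demo.py and the display function declared earlier; relative=True selects relative keypoint displacement.

# Animate the source image with relative keypoint displacement.
predictions = make_animation(source_image, driving_video, generator, kp_detector,
                             relative=True)

# Save the result, then preview it next to the inputs (the last expression renders in Colab).
imageio.mimsave('generated_relative.mp4', [img_as_ubyte(frame) for frame in predictions])
HTML(display(source_image, driving_video, predictions).to_html5_video())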

Enhancing output using absolute coordinates

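And a sketch of the absolute-coordinate variant, again assuming demo.py’s make_animation; with relative=False the animation uses absolute keypoint coordinates, so the object proportions follow the driving video.

# Animate using absolute keypoint coordinates instead of relative displacement.
predictions_abs = make_animation(source_image, driving_video, generator, kp_detector,
                                 relative=False, adapt_movement_scale=True)

imageio.mimsave('generated_absolute.mp4', [img_as_ubyte(frame) for frame in predictions_abs])
HTML(display(source_image, driving_video, predictions_abs).to_html5_video())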

You will be able to develop a deepfake detection model in this manner.

What Are the Risks of Deepfake Technology?

Deepfake videos are engaging and entertaining to watch because of their novelty. Beneath the surface of this seemingly harmless technology, however, lies a risk that could spiral out of control.

It will only get harder to distinguish fake videos from real ones as deepfake technology continues to advance. For prominent personalities and celebrities in particular, this could have severe consequences: intentionally malicious deepfakes have the potential to ruin careers and lives.

Someone with malicious intent could use them to impersonate others and take advantage of their friends, relatives, and coworkers. Fake videos of foreign leaders could even spark international controversies or wars.

Conclusion

In summary, we live in a strange time and an unusual environment. It is easier than ever to produce fake news and fake videos and spread them, and understanding what is true and what is not is becoming increasingly challenging.

Today, it appears, we can no longer rely on our own senses.

Although fake-video detectors have been developed, it is only a matter of time before the gap between real and fake becomes so narrow that even the finest detectors will be unable to determine whether a video is genuine.
