A majority of people believe everything they read or see on the internet. With AI, anyone's voice or face can be recreated with near-perfect accuracy. Deepfake technology can be used for misinformation, memes, and illegal activities.
Deepfakes are easy to make, and their popularity is growing. The term "deepfake" describes the recreation of a human's face and/or voice with the help of artificial intelligence. The worst part is that anyone can create one. As of right now, the majority of deepfakes are memes of Nicolas Cage, public service messages, and disturbing celebrity porn, but the technology is increasingly used to spread false information and cause harm to others.
In 2018, Rana Ayyub, an Indian journalist, became a victim of deepfakes. The deepfake overlaid her face onto a pornographic video, which led to threats and online harassment.
It is slowly becoming impossible to spot a deepfake. A deepfake can now be made from just a single photo, and using software like Lyrebird, a person's voice can be cloned in just a few minutes.
Can you spot the difference?
Theoretically, AI, blockchain, and detection algorithms could be used to fight deepfakes. How? AI can scan videos for traces of deepfake manipulation, and blockchain-based systems installed across operating systems could flag users or files that have passed through deepfake software. However, researchers are not sure these methods will hold up, because detection tools and deepfake generators both improve every year. We need to stop this before it causes more harm to us.
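The scan-and-flag idea above can be sketched in a few lines. This is a hypothetical toy, not a real detector: the per-frame scoring function is a placeholder (a real system would use a trained neural network), and the frame format and threshold are made-up assumptions purely for illustration.

```python
# Toy sketch of the detection idea: score each video frame for
# deepfake artifacts, then flag the clip if the average score
# crosses a threshold. The scoring function is a stand-in -- a
# real detector would be a trained classifier, not this heuristic.

def artifact_score(frame):
    """Hypothetical per-frame suspicion score in [0, 1].
    Here the frame just carries a precomputed placeholder value."""
    return frame["suspicion"]

def is_likely_deepfake(frames, threshold=0.5):
    """Flag a video as a likely deepfake when the mean
    per-frame artifact score exceeds the threshold."""
    if not frames:
        return False
    mean_score = sum(artifact_score(f) for f in frames) / len(frames)
    return mean_score > threshold

# Synthetic example: three frames with made-up suspicion values.
frames = [{"suspicion": 0.8}, {"suspicion": 0.7}, {"suspicion": 0.9}]
print(is_likely_deepfake(frames))  # high average score flags the clip
```

The arms-race problem mentioned above shows up directly here: as generators improve, the artifact scores of fake frames drop toward those of real ones, and any fixed threshold stops working.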