Deepfake technology is a video manipulation technique that employs high-powered computers and deep learning to produce highly realistic-looking videos of events that never happened.
How Do Deepfakes Work?
A deepfake video involves two machine learning (ML) models. One model creates the forgeries from a data set of sample videos, while the other attempts to recognise whether a video is fake. When the second model can no longer tell that a video is phoney, the deepfake is presumably convincing enough for a human viewer as well. This pairing of models is called a generative adversarial network (GAN).
A GAN performs better when it has a huge data set to work with, which is why politicians and Hollywood figures appear so frequently in early deepfake footage: there is abundant video of them that a GAN can utilise to make incredibly realistic deepfakes.
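The adversarial loop described above can be sketched in a few lines. This is a minimal toy illustration, not a video model, and every name and number in it is made up: a linear generator tries to turn random noise into samples from a "real" one-dimensional distribution, while a logistic discriminator tries to tell real from fake.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). Generator: g(z) = a*z + b.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.01, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), pushing fakes toward
    # the region the discriminator labels "real"
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(fake)))  # fake mean drifts toward the real mean of 4
```

Real deepfake GANs follow the same two-player training loop, just with deep convolutional networks operating on images instead of a line and a sigmoid.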
How Did Deepfakes Start and Who Created Them?
People became aware of deepfake technology when a Reddit user named “Deepfakes” claimed to have built a machine learning (ML) system that could seamlessly swap celebrity faces onto pornographic films. Naturally, he provided samples, and the discussion quickly grew in popularity, spawning its own subreddit. Reddit's operators had little choice but to shut it down, but by that point the technology had become well known and widely available. It wasn’t long before it was being used to make phoney videos, typically starring politicians and actresses.
The concept of modifying videos, however, is not new. Some institutions were already conducting considerable academic research in computer vision in the 1990s. During this period, most of the focus was on using artificial intelligence (AI) and machine learning (ML) to modify existing footage of a person speaking and combine it with a different audio track, a technique demonstrated by the Video Rewrite programme in 1997.
What are the Dangers of Deepfake Technology?
Deepfake videos are engaging and entertaining to watch right now because of their novelty. But beneath the surface of this seemingly amusing technology lies a risk that might spiral out of control.
Deepfake technology is progressing to the point where it will be difficult to distinguish fake videos from real ones. This could be disastrous for public figures and celebrities in particular. Malicious deepfakes can jeopardise careers and even ruin lives. People with nefarious motives might utilise them to impersonate others and take advantage of their friends, relatives, and coworkers. Fake footage of world leaders could even provoke international incidents and wars.
Is It Possible to Detect Deepfakes?
It may still be possible to recognise poorly made deepfakes with the naked eye at this time. The absence of natural human behaviour such as blinking, along with details that may be rendered incorrectly, such as wrongly oriented shadows, are dead giveaways that are generally simple to spot.
However, as technology advances and GAN algorithms improve, it may soon be difficult to detect whether a video is genuine or not. The first GAN component, which generates forgeries, will continue to improve with time.
That’s what machine learning is for: continually training the AI so that it improves. It will eventually surpass our ability to distinguish what is genuine from what is not. Some experts have estimated that fully convincing digitally edited videos could be only 6 to 12 months away.
As a result, efforts to develop AI-based deepfake defences are ongoing, though these countermeasures must evolve as the technology advances. Facebook and Microsoft, along with a number of other companies and universities in the United States, have formed a partnership to support the Deepfake Detection Challenge (DFDC). The effort aims to encourage researchers to create technology that can detect whether artificial intelligence has been used to alter a video.
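Detection tools of the kind the DFDC encourages generally work as classifiers: extract features from a video, then predict real or fake. The sketch below is a hypothetical toy, not a real detector — it trains a simple logistic-regression classifier on two entirely made-up per-video features (imagine a blink-rate measure and a compression-noise score).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: two illustrative per-video features,
# e.g. blink rate and a compression-noise score (purely invented here).
n = 200
real_feats = rng.normal([0.45, 0.2], 0.1, (n, 2))   # "real" videos
fake_feats = rng.normal([0.10, 0.6], 0.1, (n, 2))   # "fake" videos
X = np.vstack([real_feats, fake_feats])
y = np.concatenate([np.zeros(n), np.ones(n)])        # label 1 = fake

# Logistic-regression detector trained by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

Production detectors replace the hand-picked features and linear model with deep networks trained on large labelled corpora such as the DFDC data set, but the classify-real-versus-fake framing is the same.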
What Is a Shallowfake?
Shallowfakes are videos that have been modified with rudimentary editing techniques, such as speed effects, while still being passed off as genuine. When slowed down, some shallowfake videos make their subjects appear impaired; when sped up, the subjects appear unduly aggressive.
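A speed effect of this kind needs no AI at all: it simply rescales each frame's timestamp. The helper below is a hypothetical illustration of the idea, not any particular editing tool's API.

```python
def retime(timestamps, speed):
    """Rescale frame timestamps by a speed factor.

    speed > 1 plays the clip faster; speed < 1 slows it down,
    which is how a "slowed" shallowfake is made.
    """
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [t / speed for t in timestamps]

# A 3-second clip with one frame per second:
frames = [0.0, 1.0, 2.0, 3.0]
print(retime(frames, speed=0.5))  # half speed: [0.0, 2.0, 4.0, 6.0]
print(retime(frames, speed=2.0))  # double speed: [0.0, 0.5, 1.0, 1.5]
```

Real editors apply the same idea through built-in filters (for example, FFmpeg's `setpts` video filter), which is what makes shallowfakes so cheap to produce.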
A “dumbfake” is another name for a shallowfake. Other videos in this category are mislabelled to make it appear as if they took place somewhere other than where they actually did. Such fake news can have fatal repercussions, as seen in the recent bloodshed in Myanmar.
What Is the Difference between a Deepfake and a Shallowfake?
Deep learning systems are not required to create shallowfakes. Because shallowfake videos do not employ AI, they tend to be much cruder than deepfakes, though often far more numerous. The name simply refers to how the video was created and which technologies (such as deep learning) were eschewed in the process.
Are Shallowfakes Easily Identifiable?
While it is easier to tell that a video is a shallowfake, since it is more crudely produced than a deepfake, politicians, professors, and other experts believe it can still cause significant harm to its subject. Even when the original video is readily available on the Internet, less discerning viewers may fall for the fake and circulate the false information without hesitation.
Are Deepfakes and Shallowfakes Covered by Existing Cybercrime Laws?
California has made deepfake distribution unlawful since 2019. However, lawmakers have admitted that enforcing the legislation (i.e., AB 730), which makes it unlawful to circulate doctored videos, photos, or audio recordings of politicians within 60 days of an election, is difficult.
The reputations of many subjects could suffer if the disinformation from both deepfakes and shallowfakes isn’t handled appropriately.
That was an overview of deepfake technology; we hope it has given you a good understanding of the concepts discussed above.