Deepfakes | Sunday Observer


Deepfakes, a name created as a portmanteau of ‘deep learning’ and ‘fakes’, refers to media in which artificial intelligence has been used to make it seem as though someone is doing or saying something they never actually did or said. Deepfakes use machine learning and artificial intelligence to give anyone the ability to convincingly doctor footage quickly and at practically no cost. The term was coined by a Reddit user of the same name near the end of 2017, and while the concept existed far earlier than that, the potential of the technology to be both easily accessible and highly convincing became clear with that user’s creation of the r/Deepfakes subreddit around the same time.

Initially, the subreddit used deepfakes simply to create pornography, face-swapping public figures and celebrities onto adult videos. That alone is quite damaging and raises serious issues of consent, but the very definition of deepfakes makes clear the far greater, unprecedented threat this technology poses to the world, especially in an era where fake news and a general distrust of media are major social issues. At the moment, deepfakes are not perfect, but they are improving towards being indistinguishable from reality at a rate that could prove dangerous in no time at all. The threat was such that director Jordan Peele worked with BuzzFeed to make a proof-of-concept deepfake video of former American President Obama delivering a public service announcement on the threat of deepfakes; its realism exemplified how not just anyone but even political figures could become the victim of a convincing deepfake.

However, the true threat lies in how easy it is to make a convincing deepfake, and that is because the concept behind the technology is simple. Take two neural networks and feed them training data, which, in the case of deepfakes, consists of pictures and videos of the person the user wishes to imitate. One network then attempts to create a deepfake from the training data, while the other compares that deepfake against the real examples and guesses whether it is genuine. These two networks compete with each other, and with each iteration the first network, the generative network (the generator), produces more convincing deepfakes. For the second network, the discriminative network (the discriminator), to work well, a large amount of training data is needed, which is why celebrities are the most common victims of deepfakes.
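The competition described above is the idea behind a generative adversarial network (GAN). The following is a purely illustrative toy sketch, not actual deepfake code: instead of images, the “real” data is just numbers drawn from a normal distribution, the generator is a one-parameter-pair function that learns to imitate them, and the discriminator is a tiny logistic classifier that learns to tell real from fake. All names and values here are hypothetical choices for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # squashes any number into (0, 1), read as P(input is real)
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: numbers drawn from a normal distribution (stand-in for real footage)
REAL_MEAN, REAL_STD = 3.0, 0.5

# Generator: g(z) = a*z + b, where z is random noise ~ N(0, 1)
a, b = 0.1, 0.0
# Discriminator: d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.05  # learning rate
for step in range(2000):
    z = random.gauss(0, 1)
    fake = a * z + b
    real = random.gauss(REAL_MEAN, REAL_STD)

    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    # hand-derived gradients of -[log s_real + log(1 - s_fake)] w.r.t. w and c
    dw = -(1 - s_real) * real + s_fake * fake
    dc = -(1 - s_real) + s_fake
    w -= lr * dw
    c -= lr * dc

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = random.gauss(0, 1)
    s_fake = sigmoid(w * (a * z + b) + c)
    # gradients of -log s_fake w.r.t. a and b (chain rule through the discriminator)
    da = -(1 - s_fake) * w * z
    db = -(1 - s_fake) * w
    a -= lr * da
    b -= lr * db

# Sample the trained generator and see what distribution it now produces
samples = [a * random.gauss(0, 1) + b for _ in range(1000)]
gen_mean = sum(samples) / len(samples)
print(f"generator mean: {gen_mean:.2f} (real data mean: {REAL_MEAN})")
```

Each loop iteration is one round of the contest the article describes: the discriminator gets slightly better at spotting fakes, then the generator gets slightly better at fooling it. Real deepfake systems do the same thing with deep convolutional networks over millions of image pixels rather than two scalar parameters.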

Following the creation of FakeApp, a deepfake application built by the r/Deepfakes community in 2018, convincing deepfakes became possible with the click of a single button, and the amount of deepfake content doubled in 2019. Yet not a month after FakeApp’s release, r/Deepfakes and all associated deepfake subreddits were banned from Reddit for violating the site’s content policy against involuntary pornography, a stance that Twitter took as well, taking down all deepfake pornography on its site. Lawmakers globally have begun to take notice, but people do not always play by the rules, and making deepfakes illegal would only make them harder to find.

Of course, as a technology, it does have positive applications. Prajwal Renukanand of the International Institute of Information Technology in Hyderabad, India, developed a form of deepfake that automatically translates a video of someone speaking from one language to another, down to the lip movements. A short-lived trend saw the ever-memetic Nicolas Cage’s face swapped into movies he never appeared in, such as in the place of Amy Adams in Man of Steel. ZAO, a China-exclusive deepfake application, swaps a user’s face onto videos from a pre-existing library, primarily movie scenes, that users cannot add to. The app was known for its superior speed over contemporaries but prioritized looking good over accuracy.

However, for every Derpfake there are a thousand more DeepNudes, and for every small advancement in media consumption there is a threat to democracy. It is difficult to argue for the benefits of deepfakes when their negative aspects are so readily apparent. Even the Doomsday Clock, the Bulletin of the Atomic Scientists’ representation of how far humanity is from a hypothetical global catastrophe, or ‘Midnight’, was set to 100 seconds from Midnight in 2020, the closest it has ever been, and the rise of deepfakes and fake news was cited as a contributing factor, blurring the lines between truth and lies.

Already, people and organizations have begun trying to combat this new phenomenon by raising awareness and teaching people how to spot deepfakes. Such techniques include checking a video’s sources, slowing the video down and looking closely at the speaker’s mouth. However, these techniques are only a temporary solution; even older tells, like watching how often a person blinks, quickly became outdated as deepfakes advanced towards perfection. So, unless something is done to prevent this in a meaningful way, it may soon become impossible to differentiate between reality and fiction.