THE COLLEGE HILL INDEPENDENT


LEARNING FACES

The uncanny horror of deepfake videos

by Mina Rhee

Illustration by Ella Rosenblatt

published November 10, 2018


“Will ‘Deepfakes’ Disrupt the Midterm Election?” reads a recent headline from Wired. Two weeks ago, the New York Times published an opinion piece titled “Will Deep-Fake Technology Destroy Democracy?” Both articles are representative of the alarm that media and political organizations are expressing about the potential for ‘deepfakes’ to create and spread inflammatory fake news that has no way of being verified. ‘Deepfakes’ are videos that use deep learning (a type of machine learning) to graft different faces onto existing footage, and were created by and named after Reddit user “deepfakes,” who used machine learning technology to paste the faces of female celebrities onto pornographic videos. Now, however, the alarm over deepfakes is primarily about their political and national security implications; the Times article, along with many others, worries about the potential for deepfakes to create a world “in which it would be impossible, literally, to tell what is real from what is invented” — where manipulated videos of political figures saying inflammatory falsehoods can sway public opinion before they are shown to be fraudulent, or political figures can deny that real videos are accurate. Despite these worries, the most serious consumers of deepfakes on the internet seem to use them for pornography––there are thousands of faked celebrity pornography videos on multiple websites dedicated to deepfakes, but not a single instance of a deepfake political video that has been distributed widely as part of any misinformation campaign.

Interestingly, the technology behind deepfakes was used by computer scientists and academics long before a Reddit user decided to harness it for pornographic videos. In an interview with Motherboard, user “deepfakes” claimed that the deep learning model he used to create his videos was similar to one developed by researchers at NVIDIA, a technology company that sells computing hardware and software. When researchers like the ones at NVIDIA published their results, they included examples of images changed by deep learning neural networks: dogs whose breeds have been swapped, photographs in the style of different painters, a summer day turned into a snowy one––whimsical examples that are fun, but inspire little other reaction. In sharp contrast, watching a convincing deepfake provokes a knee-jerk horror––these videos are almost completely believable, like stumbling upon any clip while browsing through YouTube or Facebook. Unlike the sterile images of morphed landscapes in academic papers, deepfakes viscerally demonstrate that neural networks have the power to create a convincing reality.

The “deep” of deepfakes comes from deep learning, a form of machine learning in which a computer model learns to perform a specific task through multiple layers of data manipulation. During training, the model is repeatedly tested on sample data so that it improves at the task: it performs a series of transformations on the input, produces an output, and, based on the accuracy of that output, makes small changes to its transformation procedure. The idea is that through hundreds or thousands of iterations, the model will eventually figure out the correct transformation from input to output. As the name implies, deep learning uses multiple layers of transformations, each abstracting further on the abstraction before it, breaking the input down into the data points the model deems relevant. These stacked layers of transformations are called neural networks, because the process is loosely modeled on the way cells in the brain process information.
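To make that loop concrete, here is a minimal sketch of a training cycle. It assumes the PyTorch library, which this article does not discuss, and its tiny network, random stand-in data, and learning rate are placeholder choices meant only to illustrate the cycle of transforming input, scoring output, and making small corrections; it is not a model that produces deepfakes.

# A minimal sketch of the training loop described above (assumes PyTorch).
# The network size, data, and learning rate are illustrative placeholders.
import torch
from torch import nn

# A "deep" model: several layers of transformations stacked on one another.
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # first layer of abstraction
    nn.Linear(32, 16), nn.ReLU(),   # second layer, abstracting further
    nn.Linear(16, 1),               # final transformation to an output
)
loss_fn = nn.MSELoss()                                          # measures how wrong the output is
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)       # applies the small corrections

inputs = torch.randn(100, 64)    # stand-in sample data
targets = torch.randn(100, 1)    # stand-in "correct" answers

# Hundreds or thousands of iterations of the same cycle:
# transform the input, score the output, nudge the transformations.
for step in range(1000):
    outputs = model(inputs)              # series of transformations on the input
    loss = loss_fn(outputs, targets)     # accuracy of the output
    optimizer.zero_grad()
    loss.backward()                      # work out which small changes would help
    optimizer.step()                     # make those small changes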

Nobody tells these neural networks how to make a correct deepfake—instead, the model learns for itself through endless trials in training. The most common way to generate deepfakes uses autoencoding: an ‘encoder’ strips a face down to its generalities, and a ‘decoder’ rebuilds it with the specifics of another person’s face. The layers of the neural network figure out how to reduce a face to universal components––its expression, the lighting cast onto it, its angle––and then figure out how to build another person’s face onto this frame. Many aspects of this process are still unknown; the actual information about how faces are stripped down and rebuilt is hidden in the layers of the trained network that creates the deepfakes.
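This faceswapping setup is commonly described as one shared encoder paired with a separate decoder for each person. The sketch below is a schematic of that idea, again assuming PyTorch; the class name, layer sizes, and image dimensions are invented for illustration and are not taken from any actual deepfake codebase.

# Schematic of a shared-encoder / two-decoder autoencoder for face swapping
# (assumes PyTorch; names and sizes are illustrative only).
import torch
from torch import nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder, shared by both people: learns the "generalities" of any
        # face (expression, lighting, angle) as a compact code.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        # One decoder per person: learns to rebuild that person's specific face
        # from the shared code.
        self.decoder_a = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())

    def forward(self, image, identity):
        code = self.encoder(image)                          # strip face to universal components
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(code).view(-1, 64, 64)               # rebuild as that person's face

# During training, each decoder only ever reconstructs its own person's photos.
# To swap, run a frame of person A through person B's decoder: A's expression,
# lighting, and angle come out wearing B's face.
model = FaceSwapAutoencoder()
frame_of_a = torch.rand(1, 64, 64)        # stand-in video frame of person A
swapped = model(frame_of_a, identity="b")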

Neural networks are fundamentally unknowable because the information they encode is so abstracted that it can’t be translated into something their creators can understand. It is terrifying that these unknowable abstractions, which we currently have no way of reading, can produce images that pass for reality. Deepfake videos usually seem slightly ‘off,’ but it is difficult to point to an exact cause the way one can point to a stray line or fuzzy border in a bad Photoshop job. Identifying and tracing a flaw requires knowing how the neural network functions.

 

+++

 

Reddit user “deepfakes” began posting hardcore pornographic videos that combined the bodies of pornstars with the faces of female celebrities to the site in 2017. These videos garnered attention when Motherboard published an article titled “AI-Assisted Fake Porn Is Here and We’re All Fucked.” In an r/Deepfakes subreddit that grew to 90,000 followers, other users posted fake celebrity pornography they created using the same method. Following the outrage, Reddit, Pornhub, Twitter, and other websites banned deepfakes in early 2018, and both the subreddit and the user are now gone from Reddit. Most of these websites released statements explaining that such content violated their policies on “involuntary pornography,” as a Reddit spokesperson put it.

Although the original “deepfakes” thread and user have been banned from Reddit, deepfake technology remains available to anyone interested. A public GitHub repository allows anyone to set up their own neural networks and swap faces of their choice in images or video. Included in these code files is a manifesto whose first line is “Faceswap is not porn.” The manifesto goes on to lay out the philosophy behind deepfakes as well as some ethical guidelines for its users. That first statement is obviously true, but also necessary: while faceswaps are indeed not pornography, the specific applications of deepfakes definitely began as pornography, and the codebase still thanks and credits user “deepfakes” for making faceswapping technology accessible to casually interested people outside of academia.

Another way to read the word “deepfake” is for “deep” to modify “fake”: the deepest fake, the most fake. Although deepfakes that place a celebrity’s face onto a pornstar’s body are a step up from photoshopping them onto still images, the consumers of the videos know that they are still false. The fact that deepfakes are so unnerving reflects how strongly images are tied to our notion of reality: We accept videos or pictures of an event as proof of the event itself, so being confronted with an image that is clearly fake creates an extended sense of cognitive dissonance, as we try to resolve the believability of the video with the impossibility of its context. It’s also why deepfakes are probably more potent as pornography than they are as political weapons. Both deepfakes and pornography are forms that accept that the image viewed on the screen isn’t real but fantastical. Deepfakes drive the “fake” of fake celebrity porn videos somewhere deeper, so that while the fantasy holds at the level of the technical image, the sense of unreality is still heightened because it is so convincing. Not all faceswaps are porn, but all deepfakes are pornographic in their own way, fantasy images that insist on their reality, created by an opaque structure that can understand and create our own faces better than we can.

 

Mina Rhee B’20 deleted her Snapchat after it added the faceswap filter.