How You Can End Up in a Porn Without Even Knowing It

Photo Illustration by Elizabeth Brockway/The Daily Beast/Getty

A dangerous new threat to civil liberties is brewing online—and it starts in porn.

To be more precise, it starts in a new type of non-consensual fake porn of women. It is “new” in the sense that it is generated or manipulated by AI. These creations are a cut above a Photoshopped image of a woman’s face stuck onto a porn star’s body. Thanks to recent advances in deep learning, AI can now be trained to generate (wo)men in scenes that they never inhabited. It can reimagine them as living, breathing…and fucking. This type of AI-powered porn is better known by its colloquial name: deepfakes.

Hachette Book Group

Get a copy of Nina Schick’s DEEPFAKES here.

In its earliest incarnation, deepfakes emerged on Reddit at the end of 2017. An anonymous Redditor figured out how to harness open-source tools emerging from the AI research community to make fake porn. His creations were an immediate hit, and when he revealed how he made them, he caused a frenzy on Reddit. Copycats immediately started making their own deepfake porn. Reddit quickly tried to shut it down, but it was too late: the nuclear code had been released.

Less than three years on, deepfake porn has spawned its own unique internet ecosystem. It is also still an exclusively gendered phenomenon. Sites featuring deepfake porn of every (female) celebrity imaginable—from Emma Watson to Ann Coulter—are easily accessible with a few clicks on Google.


It is not only celebrities who are targets, however. All women are. A growing online marketplace for deepfake production means that “deepfake artists” can be commissioned for bespoke creations, with fees ranging from $20 to a couple of thousand dollars. Experts estimate that there are already over 45,000 deepfake porn videos online. Applying a kind of Moore’s Law to deepfakes, they believe the number is doubling every six months. By next summer, it will be 180,000. By 2022, it will be 720,000.
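As a quick sanity check on those projections, here is a minimal sketch of the doubling arithmetic, assuming the experts’ starting estimate of roughly 45,000 videos and a six-month doubling period (both are their figures, not independently verified):

```python
# Rough projection of the deepfake porn video count, assuming the experts'
# estimate of ~45,000 videos online today and a clean doubling every six
# months. Both figures are the estimates cited above, not verified data.

start_count = 45_000        # estimated videos online now
doubling_period_months = 6  # assumed doubling interval

for months in (0, 12, 24):  # now, next summer (~1 year out), 2022 (~2 years out)
    projected = start_count * 2 ** (months / doubling_period_months)
    print(f"after {months:2d} months: ~{int(projected):,} videos")

# Prints ~45,000 now, ~180,000 after one year, ~720,000 after two,
# matching the projections in the paragraph above.
```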

Until recently, creating this type of synthetic media would have been the exclusive domain of a Hollywood studio, or of someone with access to a lot of special-effects artists and money. The democratizing power of AI, however, is tearing down those barriers to entry. It is only a matter of time before everyone can create their own bespoke deepfakes.

While AI-generated synthetic media will have many positive, creative and commercial uses, as deepfake porn illustrates, it can also be weaponized. AI can be trained on anyone’s likeness, so everyone’s identity can be misappropriated. All that is needed is some “training data”—in this case, that means images, video or audio of the intended target.

Already we are starting to see how that is percolating beyond porn. Last year, the first serious reported case of deepfake audio fraud emerged. In August, The Wall Street Journal reported that a British energy company had lost €250,000 through the use of deepfake audio. Scammers had allegedly used AI to mimic the voice of the company’s CEO demanding that the cash transfer be made. Of course, libel, identity theft and fraud are nothing new, but the universal access to this level of sophistication (where AI will literally re-create your biometrics) for next to no cost is a major concern.

This, then, is the alarming reality: a world in which our identities can be hijacked by almost anyone and used against us in deeply damaging ways, without our consent or knowledge. But perhaps most devastating of all, deepfakes also threaten civil liberties, because they risk invalidating all documented evidence of wrongdoing. In a world where anything can be faked, everything can also be dismissed as fake.


This could have calamitous consequences. Human-rights abuses captured on film will simply be dismissed as deepfakes. Imagine how China will respond when future footage of its documented human-rights abuses of the minority-Muslim Uighur population leaks. Up to a million Uighurs are thought to be detained in prison camps in the western province of Xinjiang today. China claims they are there voluntarily for “re-education.” What will the Communist Party of China say when all visual media can be dismissed as fake? It has already instituted a law “banning deepfakes.”

This is a concern for the Western world too, including the United States. As our politics become increasingly partisan, the very notion of “reality” will be further undermined by subjective spin. Take, for example, the horrific video of George Floyd’s death that sparked a global movement against racism. Some people are already dismissing it as a deepfake.

That includes Winnie Heartstrong, a Black Republican candidate running for Congress. In June, she took to social media to share a 23-page document in which she claims that the Floyd video is a deepfake hoax involving a former NBA player and a game-show host. Her wild “theory” may be easily dismissed today, but as deepfakes become ubiquitous, it will become much harder to hold those posing a threat to our civil liberties to account.

It is difficult to imagine a more serious challenge to the sense of an objective, shared reality that is needed to keep society cohesive. What appears at first glance to be a particularly tawdry women’s issue is, in fact, a harbinger of a much broader civil rights problem. It is just that, as so often in history, women are on the front lines of this emerging threat.

Nina Schick is the author of DEEPFAKES. For more from Schick, you can visit her website.
