Deepfake Video of Mark Zuckerberg Goes Viral on Eve of House A.I. Hearing

Move over, Nancy Pelosi.

A “deepfake” video featuring the likeness of Facebook CEO Mark Zuckerberg declaring “whoever controls the data, controls the future” has surfaced, triggering a new round of questions (and smirks) about how to deal with the rise of doctored videos on the eve of a Congressional hearing on the matter.

Facebook was hit with harsh criticism last month when it refused to pull from its platform a crudely altered video of House Speaker Nancy Pelosi, who appeared to be drunkenly stumbling over her words. President Donald Trump shared the clip on Twitter with the caption, “PELOSI STAMMERS THROUGH NEWS CONFERENCE.”

The Pelosi video is likely to get plenty of attention at tomorrow’s hearing convened by the House Intelligence Committee “on the national security challenges of artificial intelligence (AI), manipulated media, and ‘deepfake’ technology.” The House committee is concerned not only with the national security implications, but, it said in a statement, with “democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.”

At the rate the technology is progressing, it won’t be long before boardrooms take up the matter, too. Adobe Research and others demonstrated just this week a deepfake tool powered by text-to-speech machine learning algorithms that can literally put words in the mouth of whoever appears in a video.

The digitally altered Zuckerberg video, which runs less than 20 seconds, appeared on Instagram over the weekend. To make the fake, two British artists used AI tools developed by the Israeli digital media company Canny AI, whose homepage carries the prominent tagline “Storytelling without Barriers.” The video languished in obscurity at first, but went viral in recent hours, collecting more than 30,000 views and counting. By this morning, discussion of the Zuckerberg deepfake was trending on Twitter.

In the video, Zuckerberg appears to be speaking to CBS News. A banner saying, “Zuckerberg: Announces New Measures to ‘Protect Elections’” appears at the bottom of the screen. But his words tell a different story. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” he begins. A CBS spokesperson told Fortune that “CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark.”

The video is unlikely to hoodwink anyone, but it’s hardly a flattering look for Facebook and Zuckerberg. Deepfake Zuck, as the AI-powered character is being called, “looks quite a bit like a Weekend At Bernie’s-style corpse-marionette,” Gizmodo quips.

In a statement emailed to tech journalists, a spokesperson for Instagram said the platform will leave the fake video up, at least for now. “We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

The video emerged just before the release of a new report on deepfakes and synthetic media by Witness, a human rights organization that has been organizing training sessions with journalists and social justice activists on how to use the latest technologies to safely report on abuses of power. With deepfakes, the concern among activists and journalists is that the technology will be used to discredit the authenticity of their work and even attack them personally, said Sam Gregory, the program director at Witness. “This is an example of an emerging threat” to spread misinformation and doubts about human rights work, he said.

One of the problems Gregory sees around deepfakes is what he calls “the tools gap.” The technology to build deepfakes is ramping up quickly; you don’t even need to be technically savvy to use it. But there are far fewer resources available to detect the fakes once they’re in the wild. Witness has discussed with technology firms the importance of sharing the underlying training data that goes into their deepfake algorithms. “As companies release products that enable creation, they should release products that enable detection as well,” Gregory says.

The role technology companies play in the proliferation of this technology will be a big talking point at tomorrow’s hearing on the Hill. Scheduled to testify are law and IT professors, plus a policy advisor at OpenAI, an AI think tank funded by Reid Hoffman’s charitable foundation and Khosla Ventures.

Facebook, which has been embroiled in the deepfakes controversy since the emergence of the Pelosi video, is not scheduled to send anybody to participate in the hearing.

