Real-looking fake videos are about to sow chaos. We must do something to call them out | Opinion

The recent broadcast of a Venezuelan propaganda video, generated by artificial intelligence, showing a seemingly real American TV presenter painting a rosy picture of that country’s dictatorship was just a preview of what’s coming: The internet will be so flooded with top-quality AI-produced fake videos and photos that the average person won’t know what’s true anymore.

The video, one of several that ran on YouTube’s “House of News” channel before it was suspended by the social-media platform in March, showed the human-looking avatar of a U.S. newscaster saying reports of Venezuela’s economic crisis were “exaggerated.” Another fake AI newscast on that same channel claimed that Venezuela’s opposition leader Juan Guaidó was connected to a $152 million corruption case.

These deepfake videos got hundreds of thousands of views. Many Venezuelans believed — and may still believe — that these alleged newscasters were real journalists. In fact, they were avatars generated by the London-based artificial-intelligence firm Synthesia.

Hard to control

With the rapid expansion of ChatGPT, Synthesia and other tools powered by artificial intelligence, deepfakes are spreading much faster than efforts to control them.

In recent weeks, a fake image of Pope Francis wearing a trendy puffy overcoat went viral, as did fake pictures of former President Trump being arrested by police on the street.

Curious about how long it will take for the internet to be flooded with deepfake images, I called Rony Abovitz, the serial entrepreneur who founded, among other technology companies, the South Florida-based Magic Leap augmented-reality firm and the MAKO surgical-robots company, whose robots have reportedly performed more than 1 million surgeries.

Abovitz, a University of Miami graduate, is working on a startup focused on a new architecture for the safe use of artificial intelligence. But he did not join Twitter’s owner Elon Musk and more than 1,000 technology leaders who recently signed an open letter calling for a six-month pause in the development of AI programs, saying that the new technologies pose “profound risks to society and humanity.”

“I did not sign it, because I didn’t think that this solution was effective,” Abovitz told me. “If we really want to solve this problem, we need to do something more intense. A six-month delay is not sufficient to address this issue.”

In his view, “we are at war” with an upcoming avalanche of realistic fake videos, photos and texts that will sow chaos unless we do something about it — and soon. It’s increasingly easy to fake a video showing a head of state saying something he never said, or any of us doing something we never did. Dictators and irresponsible pranksters will be able to use easily accessible AI tools to fake historic events.

All too real

Abovitz told me that, within the next 12 months, we’ll likely see high-quality AI-generated videos, photos and texts that look so real not even experts will be able to tell which ones are fake.

“I would say with a 90% confidence that it will be happening before the end of the year, and with a 70% confidence that it could happen before the end of the summer,” he told me. Already, people are using ChatGPT and other AI platforms to write columns in the style of famous authors and putting them online saying things the real authors never said.

When asked whether news media should start using content labels on their videos, pictures and texts — much like food nutrition labels inform consumers about products’ ingredients — Abovitz said yes, but that first there should be a verification of authenticity.

Photographers should start certifying that their pictures are authentic through a process on blockchain, the distributed database that keeps a tamper-evident record of transactions across a network of computers. “We need truth-verification systems, likely using a mix of blockchain proofs and labeling,” Abovitz said.
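The core of such a verification system can be sketched in a few lines: compute a cryptographic fingerprint of the image at the moment of capture, publish that fingerprint somewhere tamper-evident (a blockchain, in Abovitz’s proposal), and later check any circulating copy against it. Here is a minimal illustration in Python; the function names are hypothetical, not part of any real verification product, and a production system would also need signatures tying the hash to the photographer’s identity:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 content hash of a media file.

    Publishing this hash at capture time (for example, anchoring it
    in a blockchain or a public append-only log) lets anyone later
    check that a circulating copy is byte-identical to the original.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_hash: str) -> bool:
    """Check a file against a previously published fingerprint."""
    return fingerprint(path) == published_hash
```

Even a one-byte edit to the file changes the hash completely, so a mismatch flags the copy as altered; what the hash cannot do by itself is prove the original was truthful, which is why Abovitz pairs it with labeling.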

He added that, just like there are government agencies that regulate food, car and aviation safety issues, there should be government bodies that regulate AI safety issues.

I fully agree with Abovitz’s assessment that “we are at war” with fake news, and that the problem will get much worse with the new AI-generated deepfakes. We may not be too far away from three kids in a garage with a few computers putting out a perfectly made fake video that could trigger a war. It’s time to do something, like requiring watermarks or content labels on videos and photos, before things get worse.

Don’t miss the “Oppenheimer Presenta” TV show on Sundays at 8 pm E.T. on CNN en Español. Twitter: @oppenheimera

Oppenheimer