AI Deepfakes Are Making War in Ukraine Even More Chaotic and Confusing

Photo Illustration by Elizabeth Brockway/The Daily Beast/Getty

One of the biggest examples of fake news may have happened well before the internet age. The story goes that newspaper magnate William Randolph Hearst sent the illustrator Frederic Remington to Cuba in 1897 to cover a rebellion against Spanish rule. Upon arriving, Remington reportedly telegraphed Hearst that there was no war to report, to which Hearst replied, “You furnish the pictures and I’ll furnish the war.”

True to his word, Hearst drummed up public support for the Cuban rebels with his sensationalized coverage of the conflict, which helped precipitate the Spanish-American War.

While historians largely agree that the story never happened, the truth remains that images and videos captured in wartime can change the very face of a conflict. The famous “Napalm Girl” photograph helped turn American sentiment against the Vietnam War. Pictures and footage of the concentration camps in Europe brought the Nazis’ atrocities out into the daylight.

However, with technologies like artificial intelligence and deepfakes proliferating more than ever, doctored or misleading images pose a profound challenge in fighting misinformation in times of war—something that has come into stark relief during the Russo-Ukrainian War.


Since Moscow’s invasion began in 2022, we’ve seen fake videos of Ukrainian President Volodymyr Zelensky surrendering to Russia, Russian President Vladimir Putin announcing a peace deal with Ukraine, and even video game footage being used to claim that a single Ukrainian fighter pilot was taking down Russian aircraft with deadly precision.

“Deepfakes loomed over social media during the invasion,” John Twomey, a PhD student in applied psychology at University College Cork in Ireland, told The Daily Beast in an email. “The Ukraine government had been warning that they would be used in cyberwarfare for weeks before the infamous Zelensky deepfake happened. This was clearly something governments, media, and people had been aware was a possibility.”

Twomey co-authored a paper published Wednesday in the journal PLOS ONE looking at deepfakes related to the Russo-Ukrainian War and their impact on public perception of the conflict. The study found that not only did the AI-generated videos create confusion and concern among the public and the news media, they also eroded the trust users placed in any video coming out of the war, regardless of whether it was real. The paper is the first of its kind to examine the impact such AI is having on war.

The authors analyzed nearly 5,000 tweets in English, German, and French from the first seven months of 2022. They noted that the Russo-Ukrainian War marked the first time AI-generated videos were used during wartime, offering a rare and novel opportunity to study their effects on misinformation.


The conflict spawned “some of the most prominent examples of deepfake disinformation yet,” Twomey said, but his team found that “people employ[ed] healthy skepticism and prepared for the likelihood of deepfake media.”

For example, the AI deepfake of Zelensky surrendering to Putin was swiftly debunked by the Ukrainian government and news media, as were most others.

“The Zelensky video was really shittily done,” Todd Helmus, a senior behavioral scientist at the RAND Corporation and expert on AI disinformation, told The Daily Beast. “Nobody in Ukraine believed it and it didn’t raise any concern in Ukraine.”

However, he added that the video was “important” as one of the first big “national security-focused deepfake videos that was intended to fool people.” The technology, while still in its early stages and rough around the edges, will eventually become more refined, more convincing, and potentially disastrous.

And with new conflicts emerging, as evidenced by Hamas’ attack on Israel and Israel’s subsequent bombing campaign in Gaza, the technology holds greater potential than ever to spread misinformation and cause real-world harm.


“Certainly the use of deepfakes during the Russian invasion of Ukraine highlights that indeed deepfake media has the potential to play a role in any conflict,” Twomey said. “But how much of an impact that a deepfake may have? The jury is still out.”

The study’s authors added that their findings underscore a pressing need for deepfake literacy among the media and the broader public, so that people can better distinguish what is true from what is simply an AI-generated fake.

However, this is a double-edged sword, Twomey said. Perhaps the greatest near-term impact of AI deepfakes is the erosion of public trust in any media that comes out of these conflicts: the more aware the public becomes of deepfakes, the less likely they are to trust any image or video.

“Healthy skepticism is important,” Twomey said. “Ideally people are going to be able to balance an awareness of deepfakes with the knowledge that they aren’t very prevalent as it is. And to recognize the harms which come with accusing real media of being fake. People need to be aware that sometimes figuring out what is actually true can take time and not just rush to be the first person to call something fake.”

This is ultimately a good thing, though, according to Helmus, because people need to be able to adapt to technological change. He likened it to Orson Welles’ infamous 1938 radio broadcast of The War of the Worlds, which reportedly caused panic among U.S. listeners who believed it was real. Eventually, the public learned to take radio stories with a grain of salt, and the same will be true of AI deepfakes on social media. Better for the public to greet any video with an immediate sense of skepticism than to believe a deepfake of Zelensky surrendering to Putin.

In the meantime, policymakers can begin to regulate the technology behind these deepfakes to make them easier to identify. That could mean requiring watermarks on every AI-generated video, or building hardware into cameras that can cryptographically attest that genuine images and videos are authentic.
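In broad strokes, such a verification scheme resembles provenance efforts like the C2PA standard: the camera signs a cryptographic hash of the image the moment it is captured, and anyone can later check whether the file still matches that signature. Below is a minimal Python sketch of the idea, assuming an Ed25519 key pair provisioned in the camera; the names sign_capture and verify_capture are illustrative, not part of any deployed firmware or standard.

```python
# A minimal sketch of capture-time signing, assuming an Ed25519 key pair
# provisioned in the camera. Function names are illustrative only.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(image_bytes: bytes, camera_key: Ed25519PrivateKey) -> bytes:
    """Hash the image at capture time and sign the digest with the camera's key."""
    digest = hashlib.sha256(image_bytes).digest()
    return camera_key.sign(digest)


def verify_capture(image_bytes: bytes, signature: bytes,
                   camera_pub: Ed25519PublicKey) -> bool:
    """Recompute the digest and check it against the camera's signature."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        camera_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        # Any edit to the pixels changes the digest and invalidates the signature.
        return False


# A newsroom could then check a submitted photo against the camera maker's
# published public key before running it.
camera_key = Ed25519PrivateKey.generate()
photo = b"raw image bytes from the sensor"
sig = sign_capture(photo, camera_key)
print(verify_capture(photo, sig, camera_key.public_key()))         # True
print(verify_capture(photo + b"!", sig, camera_key.public_key()))  # False
```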

“I don’t think we have to be deeply concerned at this point about the future of civilization,” Helmus said. “I believe there’s reason to believe that the world will adjust to this and we won’t become a dystopian society. The world’s not ending yet. Maybe it will. But it hasn’t ended yet.”
