Will hyperrealistic AI images make fake news impossible to control?

“The 360” shows you diverse perspectives on the day’s top stories and debates.

What’s happening

Pope Francis received a wave of online praise for his fashion sense last weekend thanks to a viral photo of the 86-year-old pontiff dressed in an ultrastylish white puffy coat.

The only problem is that the image is fake. It was created using an artificial intelligence program called Midjourney, one of a handful of rapidly developing tools that allow users to, among other things, create near photorealistic images using nothing more than a well-worded text prompt.

Midjourney and other AI image creators have been around for a while, but until recently even their most convincing outputs were relatively easy to spot as fake. However, the latest versions of these tools have become better at solving issues that have historically been telltale signs that an image isn’t real — most notably, the challenge of rendering hands that don’t look deformed or have the wrong number of fingers.

The “swagged-out pope” wasn’t the only AI-generated image to gain significant traction online recently. A series of artificial photos depicting former President Donald Trump being arrested was shared widely after news broke earlier this month that he was likely to be indicted by a New York grand jury. The same thing happened with fake images of French President Emmanuel Macron being accosted by protesters.

The images of Trump and Macron had enough flaws that anyone who looked closely could spot them as inauthentic. But their existence, along with the fact that the pope photo fooled so many people, has raised serious concerns about what it will mean if AI soon becomes powerful enough to create fake news images that are indistinguishable from reality.

Why there’s debate

However harmless a fake image of a stylish pope may ultimately be, it has been called “the first real mass-level AI misinformation case,” demonstrating the power of artificial intelligence to create convincing depictions of events that never happened.

Many experts say it’s easy to see how the growing power of AI could cause serious societal harm when applied to more serious subjects. They argue that with misinformation already so rampant online, AI will soon become a powerful tool for those looking to manipulate the public’s understanding of reality — with fake images potentially able to create alternate histories, provoke political instability and even threaten the financial system. Some also say the knowledge that any photo might be artificial will undermine the public’s trust in real news sources and give bad actors the opportunity to claim that images showing their actual misdeeds are fabricated.

But not everyone is so worried. Some say the frivolousness of the pope image is a major part of what allowed it to go viral, since those sharing it didn’t feel the need to scrutinize it the way they would have if it had depicted something important. Others argue that the public has become so accustomed to fake news that most people know to fact-check stories they see online, regardless of the medium they come from. There’s also hope that growing concern about AI will lead the companies behind the image creators and social media firms to establish guardrails to protect the public, or even inspire Congress to pass new laws.

What’s next

Misinformation is far from the only area of concern about rapid developments in AI. There are also major fears about how advanced language models may become too sophisticated to control, how AI-generated art might threaten the livelihoods of real-life artists, and the dangers of deepfakes that allow anyone to superimpose someone’s face and voice into any video.

Perspectives

Even if most AI images are harmless or easily spotted, it only takes one to cause real damage

“As widely available AI image generators rapidly become more sophisticated, their creations might outpace our ability to adjust to a flood of believable but completely false images. … While many of these creations are harmless, it’s not difficult to imagine how synthetic images might manipulate public knowledge of current or historical events.” — Amanda Silberling, TechCrunch

People put more scrutiny on news that matters

“Pope Francis’s rad parka fooled savvy viewers because it depicted what would have been a low-stakes news event. … The Trump-arrest images, in contrast, depicted an anticipated news event that, had it actually happened, would have had serious political and cultural repercussions. One does not simply keep scrolling along after watching the former president get tackled to the ground.” — Charlie Warzel, Atlantic

AI will make us all more skeptical of real news as well

“I already find myself looking at real photos of politicians on social media, half wondering if they are fake. AI tools will make skeptics of many of us. For those more easily persuaded, they could spearhead a new misinformation crisis.” — Parmy Olson, Bloomberg

AI images are part of a massive experiment tech companies are conducting on the public with zero rules in place

“I think this is an example of a wider problem of technologies being pushed into our societies without any oversight, regulation or standards. … Most of our society has been left behind, not understanding how these technologies work, for what purposes and what are the consequences of that.” — Elinor Carmi, data literacy researcher, to New Scientist

The public is already primed to treat everything it sees online with skepticism

“Should we be scared of a new flood of AI misinformation making it impossible to separate fact from fiction online? Not really. … We’re still some time from a large-scale, harmful AI misinformation event. Something that measurably disrupts real life. Fact-checkers and journalists, in my opinion, are ready for such a scenario.” — Alex Mahadevan, Poynter

It’s been possible to make fake photos for years, but the efficiency of AI is a game changer

“There are now very clear incentives and benefits for duping people with AI-generated images. … Which has been true of non-AI misinfo for years, of course. Posting a photoshop of a shark on a highway during a hurricane can achieve the same kind of monetizable chaos. But now you can generate an infinite amount of variations of the shark swimming up an infinite amount of highways. And you can do it in a couple seconds.” — Ryan Broderick, tech industry writer

AI images empower conspiracists to reject the truth

“Advances in generative AI will soon mean that fake but visually convincing content will proliferate online, leading to an even messier information ecosystem. A secondary consequence is that detractors will be able to easily dismiss as fake actual video evidence of everything from police violence and human rights violations to a world leader burning top-secret documents.” — Hany Farid, The Conversation

There is no real way to hold bad actors accountable

“Who is ultimately responsible for the consequences of fake images like these? As AI technology rapidly outpaces regulation, culpability for fake images is still a gray area and largely left up to the discretion of private companies.” — Diego Lasarte, Quartz

Everything depends on whether safeguards are developed before AI becomes truly indistinguishable from reality

“The race is on to come up with some kind of solution to this problem before AI-generated images get good enough for it to be one. We don’t yet know who will win, but we have a pretty good idea of what we stand to lose.” — Sara Morrison, Vox

With the right rules and preparation, AI images can do enormous good for society

“If — alongside AI education and legislation — we look towards solving local issues with an eye on worldwide ones, the rise of this sort of technology can happen without turning society into some sort of mush. Hell, maybe AI generators of all kinds could be used for fun and harmless activities. But if we allow this technology to drag us into full hyperreality without taking appropriate precautions, then who knows what might happen — but I don’t have high hopes of it being good.” — Callum Booth, Next Web

AI is still far better at creating joy than spreading misinformation

“Yes, suddenly it seems all too obvious how artificial intelligence could easily be used to create propaganda, how it could easily be weaponized as a tool of destabilization. But, that said: The Pope Coat Incident makes clear that AI can and will also be used for the equivalent of making hyper-realistic cartoons.” — Ashley Fetters Maloy and Anne Branigin, Washington Post

Is there a topic you’d like to see covered in “The 360”? Send your suggestions to the360@yahoonews.com.

Photo illustration: Jack Forbes/Yahoo News; photos: @SaffhoArtSht/Midjourney via Twitter (2), Pablo Xavier/Midjourney via Reddit