‘A tsunami of disinformation’: OpenAI moves to blunt AI-driven election meddling

Jim Lamont votes during Utah’s municipal and primary elections at the Salt Lake County Government Center in Salt Lake City on Sept. 5, 2023. | Jeffrey D. Allred, Deseret News

Last year, months before the 2024 election cycle moved into full swing, chilling previews of artificial intelligence's power to create false imagery in service of political agendas generated plenty of buzz across social media platforms and news outlets.

After President Joe Biden announced last April that he would run for a second term, the Republican National Committee responded with an ad depicting a concocted future: a carousel of AI-generated images of dystopian mayhem in the wake of a Biden reelection victory.

And a month before that, a series of AI-generated deepfake images showed former President Donald Trump resisting arrest, attempting to flee and eventually being forced to the ground and dragged off by a group of New York City police officers. Like the imagery in the RNC's spoof ad, the pictures were entirely fabricated by AI-powered digital tools.

While both the RNC and the individual who created the fake Trump images openly acknowledged their work was manufactured, worries abound that AI platforms, now remarkably proficient at generating false text, photos, video footage and audio clips, will be put to their worst use in attempts to tip the scales in upcoming elections in the U.S. and around the world.

This week, OpenAI, the company behind the AI chatbot ChatGPT and the AI-driven text-to-image generator DALL-E, announced its plans to help prevent its products, among the most popular and widely used AI tools in the world, from being leveraged in disinformation campaigns.

“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI wrote in a blog post earlier this week.

To that end, OpenAI says new and ongoing guardrails built into its products aim to ban or limit the ways they can be used to generate misinformation. Those parameters, according to the company, include:

  • Not allowing use of the tools to build applications for political campaigning and lobbying.

  • Not allowing programmers to use OpenAI technology to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government).

  • Not allowing use of OpenAI technology in applications that deter people from participating in democratic processes, for example, by misrepresenting voting processes and qualifications (e.g., when, where or who is eligible to vote) or by discouraging voting (e.g., claiming a vote is meaningless).

  • Screening protocols built into DALL-E that reject requests to generate images of real people, including political candidates. (A conceptual sketch of how such a filter might work appears below.)
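OpenAI has not published the internals of these screening protocols, so any concrete rendering is speculative. As a rough illustration only, a prompt filter of the kind the company describes might work along these lines; the blocklists, names and function here are hypothetical placeholders, not OpenAI's actual implementation:

```python
import re

# Hypothetical blocklists for illustration only. A production system
# would use named-entity recognition against a maintained roster of
# public figures, not a hard-coded set of names.
REAL_PERSON_NAMES = {"joe biden", "donald trump"}
CAMPAIGN_TERMS = {"campaign ad", "vote for", "lobbying"}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an image-generation request."""
    text = re.sub(r"\s+", " ", prompt.lower())  # normalize whitespace
    for name in REAL_PERSON_NAMES:
        if name in text:
            return False, f"request depicts a real person: {name!r}"
    for term in CAMPAIGN_TERMS:
        if term in text:
            return False, f"political campaigning use: {term!r}"
    return True, "ok"

if __name__ == "__main__":
    print(screen_prompt("Photo of Donald  Trump resisting arrest"))
    # (False, "request depicts a real person: 'donald trump'")
    print(screen_prompt("A watercolor painting of a mountain lake"))
    # (True, 'ok')
```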

OpenAI also announced it will soon introduce a digital “watermarking” process for images produced by its tools. The embedded credential is designed to allow images to be identified as AI creations wherever they appear.
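The mechanism isn't detailed here, though OpenAI's announcement pointed toward cryptographic provenance credentials (the C2PA standard) rather than a visible stamp. As a loose sketch of the underlying idea of a tamper-evident tag, here is a toy version using an HMAC over the image bytes; the shared key and helper functions are illustrative stand-ins, since real provenance schemes rely on public-key signatures and standardized metadata:

```python
import hashlib
import hmac

# Illustrative shared key; a stand-in, not how C2PA credentials work.
SIGNING_KEY = b"generator-secret-key"

def tag_image(image_bytes: bytes) -> bytes:
    """Produce a provenance tag binding the image to its generator."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()

def verify_tag(image_bytes: bytes, tag: bytes) -> bool:
    """Check the tag against the image; fails if either was altered."""
    expected = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    image = b"\x89PNG...stand-in image bytes..."
    tag = tag_image(image)
    print(verify_tag(image, tag))         # True: image unmodified
    print(verify_tag(image + b"x", tag))  # False: image was altered
```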

OpenAI is only one of dozens of companies that have built, or are building, AI-driven tools that make the creation of false images, video and audio accessible to anyone with a computer, tablet or smartphone.

And that’s a point Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation, made on his TechTank podcast last November.

“Through prompts and templates, basically anybody can generate fake videos, fake press releases, fake news stories or other types of false narratives,” West said. “I am predicting a tsunami of disinformation in the 2024 campaigns through fake videos and audio tapes.”

In a Brookings report published last May, West noted that the expected tightness of the 2024 U.S. presidential race creates opportunities for disinformation campaigns aimed at the small group of swing voters likely to decide the contest.

“Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, critical race theory, transgender issues or the Ukraine war,” West wrote. “It can also create messages that take advantage of social and political discontent, and use AI as a major engagement and persuasion tool.”

West also shared his concern that a fundamental lack of regulation and oversight of AI-generated content will only exacerbate the use of AI-created misinformation and disinformation in the upcoming election cycle.

“What makes the coming year particularly worrisome is the lack of guardrails or disclosure requirements that protect voters against fake news, disinformation or false narratives,” West wrote.

And while the RNC and the creator of the fake Trump arrest photos openly acknowledged their use of AI tools, West said there is little reason to believe that practice will become the norm.

“It is more likely that people will use new content tools without any public disclosure and it will be impossible for voters to distinguish real from fake appeals,” West wrote.