‘A bunch of malarkey’: New Hampshire AG investigating AI-generated robocalls in Biden’s voice

President Joe Biden arrives at the White House on Monday, Jan. 22, 2024, after returning from Rehoboth Beach, Del. | Andrew Harnik, Associated Press

Less than 48 hours before polls were set to open for Tuesday’s New Hampshire primary, reports began surfacing about robocalls made in the voice of President Joe Biden, encouraging residents of the state to skip voting in the nation’s first presidential primary of the 2024 election season.

It turns out, however, that the message was fake, generated by an artificial intelligence tool, and the New Hampshire attorney general has launched an investigation into the source of the calls.

While it’s unclear how many of the deepfake calls were made, The Associated Press reviewed a recording of the call and reported it uses a generated voice similar to Biden’s and employs his often-used phrase, “What a bunch of malarkey.” It then tells the listener to “save your vote for the November election.”

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the voice mimicking Biden says. “Your vote makes a difference in November, not this Tuesday.”


On Monday, New Hampshire Attorney General John Formella announced his office had received complaints about the calls and that members of his election law unit were investigating the matter. Biden does not appear on the New Hampshire primary ballot, and he has not campaigned in the state, but some Democratic groups have advocated for him as a write-in candidate.

Formella’s office directed recipients of the calls to disregard their content. It also clarified that the calls’ apparent source, the phone number of Kathy Sullivan, a former state Democratic Party chair who helps run Granite for America, a super PAC supporting the Biden write-in campaign, had been “spoofed.”

“These messages appear to be an unlawful attempt to disrupt the New Hampshire presidential primary election and to suppress New Hampshire voters,” Formella’s office said in a Monday press release. “New Hampshire voters should disregard the content of this message entirely. Voting in the New Hampshire presidential primary election does not preclude a voter from additionally voting in the November general election.”

Sullivan said she alerted law enforcement and filed a complaint with the attorney general after multiple voters in the state reported receiving the call Sunday night, per AP.

“This call links back to my personal cellphone number without my permission,” Sullivan said in a statement. “It is outright election interference, and clearly an attempt to harass me and other New Hampshire voters who are planning to write in Joe Biden on Tuesday.”

Generative AI deepfakes already have appeared in campaign ads in the 2024 election and, according to The Associated Press, the technology has been misused to spread misinformation in multiple elections across the globe over the past year, from Slovakia to Indonesia and Taiwan.

“We have been concerned that generative AI would be weaponized in the upcoming election and we are seeing what is surely a sign of things to come,” said Hany Farid, an expert in digital forensics at the University of California, Berkeley, who reviewed the call recording and confirmed to AP it is a relatively low-quality AI fake.

Earlier this month, OpenAI, the company behind the AI chatbot ChatGPT and artificial intelligence-driven text-to-image generator DALL-E, announced plans to help prevent its products — among the most popular and widely used AI tools in the world — from being leveraged in disinformation campaigns.

“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI wrote in a blog post.

But OpenAI is only one of dozens of emerging companies building AI-driven tools that have made the creation of false images, video and audio accessible to anyone with a computer, tablet or smartphone.

And that’s a point Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, made on his TechTank podcast last November.

“Through prompts and templates, basically anybody can generate fake videos, fake press releases, fake news stories or other types of false narratives,” West said. “I am predicting a tsunami of disinformation in the 2024 campaigns through fake videos and audio tapes.”

In a Brookings report published last May, West noted that an expected tight race in the 2024 U.S. presidential election exposes opportunities for disinformation campaigns to be waged on the small group of swing voters who are likely to decide the contest.

“Generative AI can develop messages aimed at those upset with immigration, the economy, abortion policy, critical race theory, transgender issues or the Ukraine war,” West wrote. “It can also create messages that take advantage of social and political discontent, and use AI as a major engagement and persuasion tool.”