Deceptive AI campaign ads could target Wisconsin. Lawmakers have a plan to fight them.

MADISON — As the 2024 election cycle ramps up, you might have already seen or heard political campaign advertisements made with artificial intelligence.

Or, a more alarming prospect: You have no idea if you did.

Before the New Hampshire primary, a phone call mimicking President Joe Biden's voice told thousands of voters to skip the polls. A pro-Ron DeSantis ad featured former President Donald Trump's voice reading a post he wrote on Truth Social, but he didn't speak those words in real life. And last spring, Republicans used AI to create an ad depicting a dystopian future under a second Biden term.

Those examples were cited by lawmakers, experts and advocacy groups interviewed by the Milwaukee Journal Sentinel, though no one could recall obvious uses of AI in ads aired in Wisconsin. But the battleground state could become a prime target in the immediate days before elections, with little time for remedy.

There have been early efforts to rein in artificial intelligence in politics before the 2024 elections. The Federal Communications Commission just ruled robocalls with AI-generated voices are illegal. And the Federal Election Commission has started a process to regulate "deliberately deceptive" campaign ads.

But the federal government is "going very slow," said Craig Holman, government affairs lobbyist for Public Citizen, a consumer advocacy group that asked the FEC for those rules. "We really are counting on the states to step up to the plate."


Five states, including Minnesota and Michigan, already have laws on the books that regulate AI in campaign ads, according to the National Conference of State Legislatures.

A bill that has been moving swiftly through the state Legislature would add Wisconsin to the list.

"It's important going into election season that people are able to believe and trust what they see and hear through campaign ads and things online," said Rep. Adam Neylon, R-Pewaukee, an author of the bill.

Two public hearings were held in the last two months — at the first, bill author Rep. Clinton Anderson, D-Beloit, provided testimony generated by AI to show the sophistication of tools like ChatGPT. The bill passed the full Assembly last week and awaits a vote in the Senate. Democratic Gov. Tony Evers says the bill has his support.

What would the Wisconsin AI campaign ads bill do?

The Wisconsin bill would require campaign ads that contain synthetic media — "audio or video content substantially produced by means of generative artificial intelligence" — to include a disclaimer.

The words "contains content generated by AI" would be spoken at the beginning and end of radio ads. Video ads would have to display readable text stating that audio or video content was generated by AI, both during the portion that contains it and at the beginning and end of the ad.

Unlike Texas and Minnesota, which essentially ban AI-generated ads in the lead-up to elections, the Wisconsin bill doesn't include a timeline or blanket restriction. Lawmakers were concerned that could invite legal challenges based on free speech.

"At some point, that discussion needs to be had, and that needs to be settled in court," Neylon said. "Because I would argue, personally, that you don't have the freedom to go spread false and misleading information about people."

Can a TV station stop an AI-generated ad from airing?

The bill includes a $1,000 fine for each violation, though it was amended to exclude broadcasters like television and radio stations from liability.

The FCC requires stations to run candidates' ads regardless of their content. Broadcasters do have to check political ads for certain things before they air, such as volume levels and a disclaimer stating who paid for the ad.

"Those things obviously take time, and there's no process or anything that we could use to try to vet if it was created by AI," Anna Engelhart, general manager of WKOW-TV in Madison and secretary of the Wisconsin Broadcasters Association, said at the second hearing.

What will and won't count as AI in political ads?

The bill doesn't specify what uses and types of AI are in and out of bounds, and lawmakers say they'll likely return to that question as AI continues to advance.

"There has not been a line in terms of what modifications are okay," said Dietram Scheufele, who studies misinformation and social media at the University of Wisconsin-Madison. Public opinion about what's acceptable in altering content has changed, such as editing photos of ourselves on Instagram or LinkedIn, he said.

Campaign ads have long used grainy or black-and-white photos to portray opponents, Scheufele noted. In the 2022 Wisconsin Senate race, supporters of Lt. Gov. Mandela Barnes criticized a mailer produced by the state Republican Party that used a filter to darken one side of the flier, including a picture of Barnes, who is Black.

Holman wants to see disclosure for deepfakes — a deceptive visual that combines multiple images or videos to show a candidate doing or saying something they did not. He thinks AI could become so common that every ad could use it to enhance scenery, for example. Neylon noted some elements might not be visible, like using AI to write the script.

"It's really hard for the government to step in and draw lines between what is an appropriate use of AI or what is not," Sen. Mark Spreitzer, D-Beloit, said at a hearing. Spreitzer, also an author of the bill, said a disclaimer will alert the public, who can make their own determination.

It's also difficult to determine whether satirical ads are harmful and would impact an election, Neylon added, citing a deepfake video of DeSantis as Michael Scott from "The Office." A similar bill in Congress, led by Sen. Amy Klobuchar of Minnesota, makes exceptions for satire and parody to comply with the First Amendment.

Who would enforce and respond to AI campaign ads in Wisconsin?

The bill allows the Wisconsin Ethics Commission to create exceptions, though the commission didn't respond to questions about what it would consider. Neylon said the commission would have some flexibility to create rules as AI capabilities change.

Rep. Donna Rozar, R-Marshfield, proposed strengthening the penalty with up to six months in jail, but withdrew the change because it would significantly alter the bill by putting enforcement in the hands of district attorneys. DAs can also get involved for violations of a state law that prohibits publishing "a false representation pertaining to a candidate or referendum which is intended or tends to affect voting at an election."

That statute, 12.05, was also the basis of a Public Citizen petition to the Wisconsin Elections Commission, which asked the commission to clarify that the law applies to "deliberately deceptive" AI content in campaign communications. The petition will likely be placed on a future meeting agenda for discussion, a spokesman said, and new rulemaking requires a two-thirds vote of the commission.

State attorneys general will also play a role in addressing AI-generated robocalls — the FCC ruling gives them new tools to "go after bad actors." Attorney General Josh Kaul said the FCC and New Hampshire AG took quick action on the robocall. He expects the federal government and state Department of Justice would coordinate if something similar happened in Wisconsin.

"If there are efforts to mislead voters or to put out disinformation to try to disenfranchise Wisconsinites, we're going to pursue whatever legal avenues are available to ensure that people's right to vote is protected," Kaul said in an interview. "We need to make sure that folks around the state are prepared to take action against it if necessary. I can tell you that at DOJ, we will be."

Who would create AI ads, and who would they affect?

Experts think outside groups and foreign adversaries are more likely to push deepfake ads, rather than candidates themselves. It could be especially difficult to track down those groups close to an election — Holman thinks candidates should be allowed to get a quick court injunction to put an ad on hold.


"If some deepfake comes out of Biden falling down repeatedly right before the election in key states, and it all turns out to be fake five days later, that's completely irrelevant," Scheufele said. "We don't have video-assisted review like we have in in football, which means the game will have ended and the result will stand."

Nick Ramos, executive director of the Wisconsin Democracy Campaign, said it's become harder for his organization to track dark money groups and see where they're putting money into ads. He sees the bill as a good start but hopes lawmakers will look closer at money in politics.

"To an average person, if they're going to be running an AI or deepfake ad, a $1,000 (fine) might really hurt them," Ramos said. But it's a "drop in a bucket" for groups that are "flooding millions of dollars to try and win elections."

Deepfake ads could target both parties — likely one reason the bill has bipartisan support. In Congress, Wisconsin's Democratic U.S. Sen. Tammy Baldwin and Rep. Mark Pocan signed onto a letter supporting FEC regulation that said AI could "significantly disrupt the integrity of our elections." Republican Rep. Bryan Steil said he wants to ensure existing laws that could apply to AI are being enforced, but is open to additional regulation if it's needed.

"I think there's always room to consider additional potential disclosures to provide additional transparency to the American people," Steil said in an interview. "The new technologies of AI are really just a continuation of some of the challenges that we've seen in political ads."

How else are Wisconsin lawmakers responding to AI?

Lawmakers and advocates are both cautious and optimistic about AI. Some have called it a "wild west" and worry it's "moving so fast." Others highlight benefits, like helping catch diseases in health care settings.

Legislators have been considering how to put guardrails around AI while making sure innovation is still encouraged. That was the directive of a bipartisan task force, led by Republican Rep. Nate Gustafson of Fox Crossing.

"AI is going to be absolutely fundamental to everything we do going forward, but it shouldn't be this scary thing," Gustafson said. "Because it's so unknown, (lawmakers) are not quite ready to embrace it. That's standard with technology."

One bill that came out of the task force would require state agencies to report their use of AI and propose staff reductions. Democrats oppose that bill, though Gustafson said the intention was to see if AI can fill gaps in vacant positions. Steil also led a hearing on how the legislative branch could use AI.


Other bills would create criminal penalties for using AI to generate child or revenge pornography. Another bill would give consumers more rights over their personal data.

Scheufele predicts that data will be used in "unbelievably sophisticated" ways to impact voters, such as tailoring political messaging to our emotions based on what our devices know about how we type and talk. Even so, AI "doesn't have to be sophisticated to be effective," he said. "It needs to hit a nerve."

How can you tell if an ad was created with AI?

What can you do this campaign season to figure out if an ad is real or generated by artificial intelligence?

If you're scrolling on X (formerly Twitter), you might come across community notes on posts that users think contain misleading information. Scheufele compares that to a smoke alarm or carbon monoxide detector — it's a "pretty good first alert" that something is wrong, but the tool can be hijacked.


He recommends starting by Googling terms that describe the video clip or image, such as a candidate standing in front of a specific location. Consult as many different sources as possible, and consider keeping a list of trusted experts to return to.

Holman points out that AI technology wasn't always this advanced, and voters could once tell when an ad was fake. Now, AI-generated ads can look real enough that voters can't separate fact from fiction.

"If elections are going to be decided by deepfakes or misinformation, voters are going to lose confidence that elections are valid," he said. "And once that happens, we risk losing democracy."
