AUSTIN (Nexstar) — With early voting in the Texas primary less than a month away, technology experts are warning about artificial intelligence’s role in election misinformation after several high-profile incidents involving AI.
Reports emerged two weeks ago of robocalls that used AI to imitate the voice of President Joe Biden, apparently discouraging voters from showing up to the polls ahead of the Jan. 23 primary in New Hampshire. The state attorney general has since opened an investigation, saying the calls appeared to be an illegal attempt at voter suppression.
The incident is raising new questions about how AI could affect the 2024 election, prompting state legislatures across the country to propose banning, or requiring disclosures for, AI-generated election content.
Zelly Martin — a graduate research assistant at the University of Texas at Austin’s propaganda research lab — said these recent misinformation campaigns using AI highlight the growing need for regulation of the technology.
“We really need to think critically about those concerns without overhyping and devolving into panic about the new technology that is just going to be here now,” she said. “If you aren’t restricting the technology, it is possible that it can get pretty convincing very quickly.”
The federal Cybersecurity and Infrastructure Security Agency (CISA) published a study explaining the known capabilities and uses of generative AI. The list included a wide range of malicious activity, from deepfake videos of news anchors to fake accounts used for phishing campaigns in online chatrooms.
Additionally, AI-generated photos could be used to create fake social media accounts that appear realistic and trustworthy.
CISA also presented potential concerns directly related to elections. AI capabilities allow foreign and domestic influence actors to expand the scale and pervasiveness of their disinformation campaigns.
“If there are deepfakes and videos of Biden saying something that he didn’t actually say that sounds just like him, people are going to have a really hard time parsing fact and fiction,” Martin said. “Double check your sources. Look for reliable sources.”
According to the study, AI could be used to generate malware that evades detection systems and creates “convincing fake election records.” Cloning technology could be used to imitate the voices of election officials and gain access to sensitive information.
Challenges in regulating AI
In 2019, Texas became the first state to ban the creation and distribution of deepfake videos intended to influence an election or harm a candidate through Senate Bill 751.
The law criminalizes such videos published within the 30 days before an election. It doesn’t mention deepfake photos or audio, and it is the only section of the Texas Election Code pertaining to AI.
Texas created the Artificial Intelligence Advisory Board last June. The board oversees how the technology is being used in state agencies and is tasked with suggesting legislative changes. Agencies are required to report information to the board by July 2024.
Congress has increasingly held hearings and discussions about AI but has yet to pass any legislation that would restrict the use of the technology or add penalties for harmful uses of it.
In 2023, six bills addressing deepfakes in campaigns and elections were introduced on Capitol Hill. Two of them passed the House, but none ultimately made it to the president’s desk.
Martin expressed doubt that U.S. lawmakers will be able to pass meaningful laws to curb AI-driven misinformation and disinformation, noting that Congress has struggled to regulate social media since the platforms first emerged.
“We are so behind on regulating that, because we didn’t get ahead of it when it happened,” she said. “The policy makers might not agree on how best to do it fast enough and so we’re at sort of a disadvantage there…the best solution, which isn’t a perfect solution, is probably still education.”