How AI might influence democracy in 2024

Electoral ballot spoiled with a broken pencil.

Google will restrict its artificial intelligence chatbot in the run-up to the US election next year out of an "abundance of caution", amid growing fears of disinformation and threats to democracy.

The tech giant plans to label any AI-generated content on its platforms, including YouTube, and specify where political ads have used digitally altered material. "Like any emerging technology, AI presents new opportunities as well as challenges," the company said in a statement. "But we are also preparing for how it can change the misinformation landscape."

It comes as former justice secretary Robert Buckland has warned that the UK is not ready for a deepfake general election. The Tory MP is urging the government to do more to tackle what he sees as a "clear and present danger" to democracy, warning that realistic audio and video clips of politicians appearing to say things they never said could increasingly be used. "The future is here," he said. "It's happening."

How might AI influence elections?

Leaders and experts gathered at the UK's Bletchley Park in November for the world's first AI safety summit, with the UK, EU and US all setting wheels in motion for AI regulation and legislation. The UK's Government Office for Science released an accompanying report warning that generative AI could be used to mount "mass disinformation" by 2030. It could lead to the "erosion of trust in information", with "hyper-realistic bots" and "deepfakes" muddying the waters, said the report.

"Next year is being labelled the 'Year of Democracy'," said Marietje Schaake in the Financial Times, with key elections scheduled to take place in the UK, US, EU, India, Taiwan, Indonesia and potentially Ukraine. AI is "one of the wild cards that may well play a decisive role" in the votes, wrote Schaake, policy director at Stanford University's Cyber Policy Center.

Generative AI, "which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams", she added. While much of generative AI's impact on elections is still being studied, "what is known does not reassure".

Truth "has long been a casualty of war and political campaigns", said journalist Helen Fitzwilliam in a piece for the Chatham House think tank, but now there is "a new weapon in the political disinformation arsenal". Generative AI tools can "in an instant clone a candidate's voice, create a fake film or churn out bogus narratives to undermine the opposition's messaging", wrote Fitzwilliam. "This is already happening in the US."

Taiwan's voters, who will choose the successor to President Tsai Ing-wen in January, are "expected to be the target of China's formidable army of about 100,000 hackers". About 75% of Taiwanese receive news and information through social media, so "the online sphere is a key battleground". AI can act as "a force multiplier, meaning the same number of trolls can wreak more havoc than in the past".

Days before the Slovakian election, fake audio recordings of Michal Šimečka, the leader of the Progressive Slovakia Party, in which he was heard discussing plans to rig the ballot, were shared online, said Politics Home's "The House" magazine. A similar incident involving a fake audio clip of Labour leader Keir Starmer prompted Conservative MP Simon Clarke to brand generative AI "a new threat to democracy", said Tom Phillips, former editor of fact-checking organisation Full Fact. Such disinformation and hoaxes aren't new, but AI "lets you do it far quicker, far cheaper and at an unprecedented scale".

AI could also use automation to "dramatically increase the scale and potentially the effectiveness of behaviour manipulation and microtargeting techniques that political campaigns have used since the early 2000s", said political scientist Archon Fung and legal scholar Lawrence Lessig on The Conversation. Just as advertisers use browsing and social media history to target ads, an AI machine could pay attention to hundreds of millions of voters – individually.

What can be done?

"It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI," said Fung and Lessig. "We believe that is unlikely." However, enhanced privacy protection would help, they wrote, as would election commissions.

Other possible steps to mitigate the threat include independent audits for bias, research into disinformation efforts and the study of elections that have already taken place this year, including those in Poland and Egypt, noted Schaake.

This month the EU reached a provisional deal on the Artificial Intelligence Act, agreeing to ensure that AI "respects fundamental rights and democracy". The EU's AI Act, due to be finalised before the European Parliament elections in June next year, would classify AI systems by level of risk and regulate them according to their category. The White House has also issued an executive order on safe, secure and trustworthy AI and a blueprint for an AI Bill of Rights.

Ultimately, there are "reasons to believe AI is not about to wreck humanity's 2,500-year-old experiment with democracy", said The Economist. Although it is important to be mindful of the potential of AI to disrupt democracies, "panic is unwarranted".