As the cradle of tech, California looks to be a leader in AI regulation

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. The U.S. Federal Trade Commission has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot, according to reports in the Washington Post and the New York Times.

California lawmakers returned to the Capitol in January with their minds on an issue that just a few years ago was more science fiction than reality for the average constituent: artificial intelligence.

Legislators introduced a raft of bills over the past year attempting to get a handle on the technology as it hurtles into daily life, with new proposals announced at breakneck speed in an attempt to regulate AI statewide. Their efforts join a broader push by numerous state governments and federal agencies, as a widening chorus of experts warns of AI's potential for harm if left unchecked.

"For five to ten years now, people's lives are in every way imaginable touched by algorithmic decision-making," said leading AI policy expert Suresh Venkatasubramanian. "You can pick whichever sector you want: health care, employment, credit, housing, criminal justice, education—every sector."

But the area of AI use that has lawmakers and experts particularly concerned is elections, and not without reason. In the first few months of the 2024 primary season, voters in several states have been exposed to AI-generated misinformation in various forms. Generative AI, which can create realistic images, videos, audio and other fabricated content often called "deepfakes," is attracting the most attention.

Florida Gov. Ron DeSantis was roundly criticized last summer for his presidential campaign's use of AI-manipulated photos showing former President Donald Trump hugging and kissing Dr. Anthony Fauci, whose work leading the nation through the pandemic drew virulent criticism from Trump and DeSantis.

More recently, New Hampshire Democrats reported robocalls in which President Joe Biden seemed to urge voters to stay home during the state's primary. Both are among the most high-profile instances of deepfakes in the presidential primary elections so far, prompting federal and state investigations and underlining how AI is being used in attempts to confuse, suppress or manipulate voters.

Governments and academics aren't the only ones worried.

A November 2023 poll from the University of California's Institute of Governmental Studies, one of the most well-regarded polling operations in the state, found Californians have started paying attention to the potential impact of this new technology on the 2024 elections. The survey found 84% of residents are concerned about the dangers that disinformation, deepfakes and artificial intelligence pose in the 2024 elections, and 73% agree the state government has a responsibility to act to protect voters.

In California specifically, the cradle of the technology sector, lawmakers and researchers see themselves as potential vanguards in both developing and regulating AI. The state considers itself the world leader in generative AI innovation, home to 35 of the world's top 50 AI companies and a quarter of all AI patents, conference papers and companies globally, according to a statement from Gov. Gavin Newsom's office.

Jonathan Mehta Stein is a co-founder of the California Initiative for Technology and Democracy, a project of the good-government group California Common Cause, which has been advising legislators on the threats emerging technologies pose to democracy. He points to the growing use of AI in elections around the world as evidence that it is no longer a theoretical threat but an active practice. Deepfakes spreading misinformation caused significant disruption in recent elections in Bangladesh and Slovakia. In the United States, AI-generated content has stoked considerable concern amid rising political violence and distrust in election processes.

"All of these new technologies that can deceive voters and undermine elections are coming on the heels of other depressing trends," Stein said. "In our democracy, trust in institutions and in the media are all-time lows. Beliefs that our elections are being run securely and votes are counted accurately are in doubt among huge percentages of the American population."

A January USA TODAY/Suffolk University poll found 52% of Trump supporters said they had no confidence the 2024 election results will be accurately counted and reported. In contrast, 81% of Biden supporters were "very confident" about this year's election returns.


California bills that take on artificial intelligence

In California, more than a dozen bills pertaining to AI have been proposed in the Legislature over the last year, from protecting copyrights and creative material to attempting to address discrimination in automated decision systems. Newsom signed an executive order in September outlining the administration's framework on generative AI technology, modeled in part on the White House's Blueprint for an AI Bill of Rights. It directs state agencies to assess potential AI-fueled risks, evaluate current uses and potential impacts of AI, and encourage state employee training.

  • State Sen. Scott Wiener, D-San Francisco, is among the latest to introduce a bill on the matter, Senate Bill 1047. It proposes the creation of a public computing program for developers to use when testing AI for safety.

  • State Sen. Bill Dodd, D-Napa, proposed the California AI Accountability Act in early January. Senate Bill 896 is among the most recent to reach the Capitol, following up on Dodd's Senate Concurrent Resolution 17, adopted last year, which codified the state's pledge to pursue regulations on AI use. One of the new bill's provisions would require state agencies to notify users when they are interacting with AI.

  • Also in January, state Sen. Josh Becker, D-Menlo Park, introduced Senate Bill 942, which would allow consumers to easily determine whether images, audio, video or text were created by generative artificial intelligence.

  • Assemblymember Marc Berman, D-Palo Alto, proposed a bill in December aiming to update the state's penal code to criminalize the production, distribution or possession of AI-generated child sexual abuse material.

Several other lawmakers have jumped into the regulation fight, with varying success. More bills are sitting in the early stages of introduction or have been abandoned than have made it onto the books, though this year's legislative session has only just begun.

Assemblymember Gail Pellerin, D-Santa Cruz, is leading an effort to ban the use of generative AI, such as manipulated photos and deepfakes, in all California political communications. After serving as Santa Cruz County's clerk for more than 25 years, Pellerin is especially attuned to issues of voter access and misinformation.

"Now we have AI putting out information that seems real," Pellerin said. "You could have an (AI-generated) elections official announcing that voting locations have changed or they're going to have to postpone election day, which would cause extreme chaos and confusion."

Though researchers and experts in the field champion the focus on regulation, AI is also capable of being used positively. Stein says AI's potential benefits to industry are often at the forefront because of a profit motive, but he doesn't count out ways the technology can be used not only outside of business, but within the very structures regulation hopes to protect from bad actors.

"AI can be used by challengers who have less resources than incumbents to figure out how to most efficiently target voters, or it can be used by small community organizations hoping to do valuable work to disseminate their message more effectively." Stein said. He even imagines a future when elections officials themselves harness the technology to find new efficiencies in administering elections.

However, in the short term, legislators and elections-watchers in California have their eyes trained on the next few months, as campaign season kicks into overdrive and voters prepare for the March 5 primary.

"It keeps me up at night," Pellerin said. "But, I do feel like California has really been a trailblazer in this area."

Kathryn Palmer is the California 2024 Elections Fellow for USA TODAY. Reach her at kapalmer@gannett.com and follow her on X @KathrynPlmr.

This article originally appeared on Palm Springs Desert Sun: How does California regulate AI?