How will artificial intelligence affect California state workers? New report on AI has clues

Get The State Worker Bee newsletter in your inbox every Wednesday

This is a preview of our weekly state worker newsletter. Subscribers receive more exclusive tidbits like this one, as well as a weekly roundup of all our state worker coverage. Sign up using the form linked here, or by emailing mmiller@sacbee.com.

Like it or not, artificial intelligence has established itself as a fixture of the 21st century workplace. The state of California – home to many of the world’s top AI companies – is no exception.

A new report from the Government Operations Agency examines the potential benefits and risks of the state using generative AI – a powerful tool that can create original content based on large inputs of data – in its daily operations.

The report comes in response to Gov. Gavin Newsom’s September executive order that instructed state agencies to brainstorm and develop a plan for how to “ethically and responsibly” deploy AI technology in government operations.

The report does not specifically address how or whether the introduction of generative AI would displace state employees in certain classifications.

State employees are expected to follow common-sense safety and privacy practices, such as paraphrasing AI-generated content rather than using it verbatim and never entering Californians' personal data into free, publicly available tools such as ChatGPT or Google Bard.

The report suggests that state departments could use generative AI tools in these beneficial ways:

Summarizing meetings

Analyzing public feedback surveys

Monitoring the status of public infrastructure, such as roads and bridges

Translating public documents into different languages

Generating promotional materials, such as fliers and social media posts

However, the report also points out numerous risks associated with using these tools for daily operations. Chief among those concerns is the reliability of the models and algorithms that power AI tools. Erroneous information created by government AI tools could spread harmful misinformation and disinformation and, as a result, jeopardize Californians' safety.

Fairness is also a concern, as unconscious biases related to characteristics such as race and gender could infect an AI model and unfairly deem applicants ineligible for certain government programs.

“This could reasonably erode Californians’ trust in their government and its services,” the report reads. “GenAI should center on the needs of the human workforce, support the carrying out of responsibilities to Californians, and avoid contributing to additional bureaucracy, process, or safety risks.”
