Gavin Newsom signed an executive order on artificial intelligence. What happens now? | Opinion

With his recently signed executive order, Gov. Gavin Newsom has established a path for state government to realize the benefits of artificial intelligence while governing its real and worrisome risks. But the state has a steep learning curve ahead and could use some help from California’s vast higher education system to manage this fast-emerging technology.

“We’re neither frozen by the fears nor hypnotized by the upside (of AI),” declared Newsom. Yet, unless we fix the government’s ability to understand technology, California agencies may well be frozen by what the executive order requires.

Modern AI, or machine learning, is essentially a set of tools that extract patterns from vast quantities of data. Cutting-edge AI models, for instance, can be trained on the entire internet.

The focus of the executive order is “Generative AI,” which uses very large models to generate content like imagery, text and audio.

The language model ChatGPT, for instance, which provides detailed answers to human questions, is based on a Generative AI model that can generate highly realistic text in a wide range of styles (think letters, blog posts, essays and even computer code). It has captivated the imagination of an estimated 100 million active users.

Generative AI has also set off alarm bells for its potential to cause a wide range of serious harms. AI models can, for instance, spew out toxic and racist content, leak confidential or private information, displace workers and create substantial misinformation.

While the executive order itself runs a mere seven pages, it requires a lot from state government: State agencies have to compile a report on how they should use Generative AI; assess the technology’s threats to energy infrastructure; consider Generative AI pilot projects for service delivery; conduct an inventory of “high-risk uses” of Generative AI; and train state employees in AI skills.

The only problem? Government lacks the people power to carry out such an ambitious agenda. Doing any of these things requires technical expertise. And especially in areas of emerging technology — and, acutely for Generative AI — government is woefully behind. People working on Generative AI technology are generally not going into government. One report shows, for instance, that three-fifths of recent AI PhDs go into the private sector, a quarter go into academic positions and less than 2% consider a career in the public sector.

This dearth of technical talent means that big aspirations are too often met with failures of implementation. Consider the Newsom order’s requirement, for instance, for agencies to submit inventories of Generative AI use cases. When the federal government required something similar, only 50% of agencies were able to comply, and implementation was all over the place.

How do we fix this? The order itself has the solution within sight: There is a “unique opportunity for academic research and government collaboration.”

Much AI expertise resides in universities, and a compelling solution lies in creating a talent pipeline between the academy and state government. The federal government has such a pipeline, which has enabled it to rapidly fill urgent needs for talent in science and emerging technology. Indeed, the first director of the White House National AI Initiative Office was appointed under this mechanism.

As we have spelled out in a Stanford policy brief, California should create such a mechanism for state agencies to draw on the expertise that resides across its public and private universities. Universities should be training AI talent not just for private companies, but for government, too. A simple change could create such a talent pipeline and secure the future of how the state grapples with such technology.

Newsom deserves much praise for this landmark set of commitments. And just as the federal government has drawn on academic expertise in AI, so should California.

Daniel Ho is a professor of law and political science at Stanford University, a senior fellow at the Stanford Institute for Economic Policy Research and director of the Regulation, Evaluation and Governance Lab (RegLab). He is also a member of the National AI Advisory Committee, which advises the White House on AI policy. Ho previously met with Gov. Gavin Newsom’s team on the state’s approach to Generative AI.