US aims to tackle risk of uncontrolled race to develop AI

Illustration: chombosan/Alamy

The White House has announced measures to address the risks of an unchecked race to develop ever more powerful artificial intelligence, as the US president and vice-president, Joe Biden and Kamala Harris, met the chief executives of companies at the forefront of the industry’s rapid advances.

In a statement released ahead of the meeting with the leaders of Google, Microsoft and OpenAI, the company behind ChatGPT, the US government said firms developing the technology had a “fundamental responsibility to make sure their products are safe before they are deployed or made public”.

Concerns are mounting that if AI is allowed to develop unchecked, its application by private companies could threaten jobs, increase the risk of fraud and infringe data privacy.

The US government said on Thursday it would invest $140m (£111m) in seven new national AI research institutes, to pursue AI advances that are “ethical, trustworthy, responsible and serve the public good”. AI development is dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with three produced by academia.

Leading AI developers have also agreed to their systems being publicly evaluated at this year’s Defcon 31 cybersecurity conference. Companies that have agreed to participate include OpenAI, Google, Microsoft, and Stability AI, the British firm behind the image-generation tool Stable Diffusion.

“This independent exercise will provide critical information to researchers and the public about the impacts of these models,” said the White House.

Biden, who has used and experimented with ChatGPT, told the executives they must mitigate the current and potential risks AI poses to individuals, society and national security, the White House said.

In a statement released after the meeting, Harris said technological advances had always presented risks and opportunities, and that generative AI – the term for products such as ChatGPT and the image generator Stable Diffusion – was “no different”. She added that she had told the executives at the meeting that the private sector has an “ethical, moral and legal responsibility to ensure the safety and security of their products”.

Another policy announced on Thursday involves the president’s Office of Management and Budget releasing draft guidance on the use of AI by the US government.

Last October the White House published a blueprint for an “AI bill of rights” that called for protection from “unsafe or ineffective systems”, including pre-launch testing and regular monitoring, alongside protection from abusive data practices such as “unchecked surveillance”.

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House’s announcement as a “useful step” but said more aggressive action was needed, including a moratorium on the deployment of new generative AI technologies.

“At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race and each believes themselves unable to slow down,” he said.

The UK’s competition regulator also flagged concerns about AI development on Thursday, as it opened a review into the models that underpin products such as ChatGPT and Google’s rival chatbot, Bard. This week a British computer scientist described as the godfather of AI, Dr Geoffrey Hinton, quit Google in order to speak freely about the dangers of AI.

• The headline and introduction of this article were amended on 4 May 2023 to reflect the fact the story is about the race to develop AI, not the AI “arms race”.
