Top artificial intelligence developers commit to security testing, clear labeling of AI-generated content

President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House on Friday, July 21, 2023, in Washington, as from left, Adam Selipsky, CEO of Amazon Web Services; Greg Brockman, president of OpenAI; Nick Clegg, president of Meta; and Mustafa Suleyman, CEO of Inflection AI, listen.

Seven U.S. tech companies racing to develop artificial intelligence tools are voluntarily committing to a new set of safeguards aimed at managing the risks of the advanced systems, according to a Friday announcement by the White House.

The agreements, made with Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, come amid growing concern over the ability of AI tools to generate text, audio, images and video that are increasingly difficult to distinguish from content produced by humans or from recordings of events and statements that actually occurred.

At a White House meeting with representatives from the tech companies Friday afternoon, President Joe Biden held a press conference outlining his administration's goals for constructing public safeguards around the breakthrough digital tools.

“Artificial intelligence promises an enormous ... risk to our society and our economy and our national security, but also incredible opportunity,” Biden said. “These seven companies have agreed to voluntary commitments for responsible innovation. These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security and trust.”

While Biden pointed to work his administration has done over the last year to guide artificial intelligence advancements, including the creation of an AI Bill of Rights, executive action aimed at limiting the use of discriminatory computer algorithms by federal agencies and a commitment to fund new AI research, lawmakers are struggling to construct new regulatory oversight for the fast-moving industry.

In May, the U.S. Senate convened a committee hearing that leaders characterized as the first step in a process that would lead to new oversight mechanisms for artificial intelligence programs and platforms.

Sen. Richard Blumenthal, D-Conn., who chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, called a panel of witnesses that included Sam Altman, the co-founder and CEO of OpenAI, the company that developed the ChatGPT chatbot, the DALL-E image generator and other AI tools.

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Blumenthal said.

Those past mistakes include, according to Blumenthal, a failure by federal lawmakers to institute more stringent regulations on the conduct of social media operators.

“Congress has a choice now,” Blumenthal said. “We had the same choice when we faced social media, we failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them.

“Congress failed to meet the moment on social media, now we have the obligation to do it on AI before the threats and the risks become real.”

Actions called for by the White House and committed to by the tech companies on Friday include:

  • Performing internal and third-party security testing of new AI systems before they are released to the public.

  • Investing in cybersecurity and insider threat safeguards.

  • Committing to transparent reporting practices when vulnerabilities are discovered.

  • Prioritizing research into potential harms of AI systems including bias, discrimination and privacy breaches.

  • Developing labeling or watermarking systems that clearly identify content that’s been generated or modified by AI systems (one illustrative approach is sketched below).
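
The announcement doesn’t specify how that labeling would work technically. One family of approaches, used by content-provenance standards such as C2PA, attaches cryptographically verifiable metadata to generated content. The Python sketch below is a minimal illustration of that idea only, not any company’s actual scheme; the `sign_content` and `verify_content` helpers and the signing key are hypothetical.

```python
import hmac
import hashlib

# Hypothetical signing key held by the AI provider (illustrative only).
SECRET_KEY = b"provider-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag for a piece of AI-generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether a tag matches the content; a mismatch flags modification."""
    return hmac.compare_digest(sign_content(content), tag)

# Tag a generated caption at creation time, then verify it later.
caption = b"A generated image caption."
tag = sign_content(caption)
print(verify_content(caption, tag))         # True: authentic and unmodified
print(verify_content(b"edited text", tag))  # False: content was altered
```

A real deployment would more likely use public-key signatures, so anyone could verify a label without holding the provider’s secret, and would pair such metadata with watermarks embedded in the content itself, since metadata can simply be stripped.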

White House chief of staff Jeff Zients told NPR that tech innovation comes with a built-in obligation to ensure new products don’t lead to harm for those who engage with them.

“U.S. companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure and trustworthy,” Zients said.

But he also noted that the voluntary agreements lack a defined recourse strategy should participating companies fail to meet the guidelines on policy and conduct.

“We will use every lever that we have in the federal government to enforce these commitments and standards,” Zients said. “At the same time, we do need legislation.”

Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable, per The Associated Press.

“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” James Steyer, founder and CEO of the nonprofit Common Sense Media, said in a press statement.

Altman has become a de facto figurehead for generative AI tools thanks to the enormous response to ChatGPT, the conversational platform his company developed. The platform went public last November and has since attracted over 100 million users. ChatGPT answers questions and produces prompt-driven responses, including poems, stories and research papers, that typically read very much as if created by a human, although the platform’s output is notoriously rife with errors.

Altman was among a trio of witnesses called to the Senate hearing in May, and he readily agreed with committee members that new regulatory frameworks are in order as AI tools in development by his company and others grow more capable. He also warned that AI has the potential, as it continues to advance, to cause widespread harm.

“My worst fears are that we, the field, the technology, the industry, cause significant harm to the world,” Altman said. “I think that can happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that.

“We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”