Google making changes after Gemini AI portrayed people of color inaccurately

Google said Thursday that it would temporarily limit the ability to create images of people with its artificial-intelligence tool Gemini after the tool produced historically inaccurate illustrations.

The pause was announced in a post on X after the company acknowledged the issues in a statement the day before, writing: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Google announced its “next-generation” AI model, Gemini 1.5, last week, touting it as its most capable yet. Google is one of many tech companies that are feverishly competing to develop the best generative AI systems that can create text, images and video from simple prompts.

After its launch, Gemini drew the attention of pundits and technologists, including many on the right who have been critical of efforts to make AI more inclusive; some accused Google's AI of being a prime example of "woke AI."

The most widely noticed inaccuracies involved inserting people of color into places where they wouldn’t historically appear. For example, The Verge reported that asking Gemini to create an illustration of a 1943 German soldier resulted in AI-generated drawings of nonwhite people in Nazi uniforms. Gemini also created images of some nonwhite Founding Fathers and U.S. senators from the 1800s, when in reality they were all white men.

Gemini’s racially diverse image output comes amid long-standing concerns about racial bias in AI models, especially a lack of representation for minorities and people of color. Such biases can directly harm people who rely on AI algorithms, as in health care, where AI tools can affect outcomes for hundreds of millions of patients.

Google isn’t the only Big Tech company addressing major issues with flagship AI tools this week — OpenAI, the company behind AI text generator ChatGPT, said Wednesday it had resolved an issue causing “unexpected responses” from ChatGPT. Users began noticing Tuesday that ChatGPT was malfunctioning, spitting out nonsensical sentences instead of its usual output.

“LLMs generate responses by randomly sampling words based in part on probabilities. Their ‘language’ consists of numbers that map to tokens,” OpenAI wrote in a status update on its website. “In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense.”
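The failure OpenAI describes can be sketched in a few lines. This is a toy illustration only: the vocabulary, probabilities, and the off-by-one "bug" are invented for the example and have nothing to do with OpenAI's actual implementation. It shows how a model samples a token ID from a probability distribution, and how a slightly wrong ID decodes to a nonsensical word even when the probabilities themselves are correct.

```python
import random

# Hypothetical toy vocabulary: token IDs mapping to word pieces (invented).
vocab = {0: "the", 1: "cat", 2: "sat", 3: "flux", 4: "##ing"}

def sample_token(probs, rng):
    """Sample one token ID, weighted by the model's probabilities."""
    ids = list(probs)
    weights = [probs[i] for i in ids]
    return rng.choices(ids, weights=weights, k=1)[0]

rng = random.Random(0)
# The model assigns high probability to sensible continuations.
probs = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.04, 4: 0.01}

# Normal decoding: the sampled ID decodes to a plausible word piece.
token_id = sample_token(probs, rng)
word = vocab[token_id]

# A bug of the kind OpenAI describes: the chosen number is slightly off,
# so the decoded word piece is wrong even though the sampling was fine.
buggy_id = (token_id + 1) % len(vocab)
buggy_word = vocab[buggy_id]
```

Because the "language" is just these numeric IDs, even a small error at the selection step yields word sequences that read as gibberish, which matches the behavior users reported.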
