After making racially diverse Nazis, Google limits AI image creation

Black Nazi soldiers and colonial settlers of colour - recent images generated by Google's AI appear to be taking diversity too far. Bernd von Jutrczenka/dpa

Google is no longer allowing its Gemini AI software to generate images of people after it emerged that its efforts to show more people of colour had produced historically inaccurate, artificially diverse depictions of the past.

After images emerged on social media of racially diverse Nazi soldiers and American colonial settlers, the tech giant admitted that in some cases the depiction did not correspond to historical context and that the image generation would be temporarily limited.

At the same time, Google defended its efforts to make AI-generated images more diverse, even if the company was "missing the mark" in this case.

To keep pace with rivals such as Microsoft's chat assistant Copilot, Google added a feature to Gemini three weeks ago allowing users to generate images from text prompts.

In a blog post on Friday, Google explained that it had failed to programme exceptions for cases in which diversity would definitely be out of place. The resulting images were "embarrassing and wrong," Google's Prabhakar Raghavan said.

At the same time, the software had become too cautious over time and refused to fulfil some requests, Raghavan said. If users asked for an image of a "white veterinarian with a dog," for example, the AI should comply.

In recent years, stereotypes and discrimination have repeatedly been a problem in various AI applications. Facial recognition software, for example, was initially poor at recognising people with darker skin. Many AI image-generation services, meanwhile, started out depicting mostly white people.

Developers at other companies are therefore also striving for greater diversity in various scenarios.

In the US in particular, a vocal movement, which includes tech billionaire Elon Musk, denounces what it calls racism against white people. The software's refusal to generate images showing only white people drew its particular ire.

At the same time, Google manager Raghavan said AI-powered software will continue to make mistakes for the time being. "I can't promise that Gemini won't occasionally produce embarrassing, incorrect or offensive results," he wrote. But Google would intervene quickly when problems arose.