Chinese Nazis and Black Vikings: Google suspends its image AI for overrepresentation of minorities

“Generate the image of a white man,” analyst Ben Thompson asked Gemini this Thursday. Gemini, the new version of Google’s AI presented a week ago, responded that it couldn’t do that because the request “refers to a particular ethnic group.” Thompson then asked for Black and Asian men, and Gemini simply drew them.

Thompson’s test joined others, such as requests for images of the Founding Fathers of the United States. The examples quickly became the latest chapter in the culture war: left-wing “woke” thinking, in the words of conservative media and leaders, had taken over Google, which wanted to rewrite history.

The company confirmed the shutdown of the image generation service with a statement in which it did not give a reactivation date: “We are already working to resolve recent issues with Gemini’s image generation feature. In the meantime, we are suspending the generation of images of people and will release an improved version soon.”

One of Google’s AI leads, Jack Krawczyk, gave a more specific explanation on X: “We designed our image generation to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open-ended requests (images of a person walking a dog are universal). Historical contexts are more nuanced, and we will refine our approach further to take this into account.” Shortly afterward, he had to close his account because other users unearthed old progressive tweets of his.

Gemini’s own AI, in its text version, explained why image generation was not working: “Some users reported that Gemini generated racist images, showing white people less often or with less favorable characteristics than people of other races,” and “that it generated historically incorrect images, such as images of Black Vikings or Black Nazi soldiers.”

This controversy is another example of the human role in generative AI. Artificial intelligence is fed by millions of data points that accumulate every imaginable human bias. Google, to head off public criticism, tried to keep the white male from being the dominant gender and ethnicity when users asked for generic examples of people: a doctor, a programmer, a soccer player. But the machine understood that the same must apply to Vikings, Nazis, or medieval knights. The AI learned to correct this bias in any image of a person, even in cases with historical evidence to the contrary.

Elon Musk, in his role as a new anti-woke leader and a rival of Google on several fronts, in the AI race (with his Grok tool) and in video creation (X aspires to compete with YouTube as a platform for creators), took the opportunity to fire off a stream of messages about the controversy: “I am glad Google went too far with its AI image generation, because that made its crazy racist and anti-civilization agenda clear.”

Musk also said this Friday on his X account that he had spoken for “an hour” with a Google executive: “He assured me that they would take immediate action to address gender and racial bias in Gemini. Time will tell,” Musk wrote.

The complexity of Google’s approach to handling diversity in its AI is also shown by the fact that one request that produced a large majority of white men was for a basketball team. It should be noted that results from these AIs are not exactly reproducible, especially if the query, or prompt, is slightly reworded.

You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.