Google: Uh, Yeah, Sorry Our AI Is So Woke

As you've probably heard, the rollout of Google's new Gemini AI hasn't been as smooth as the company might have hoped. Yesterday, lots of people were experimenting with the image generation feature and finding that the AI was extremely eager to diversify the results of every prompt, even when it made no sense to do so.

There were dozens of results like this, including several images of a black pope, a female Super Bowl champ and even some diverse Nazis (more on that in a moment). One thing Gemini seemed extremely reluctant to portray was white people. You could ask it to draw a Hispanic male or a "beautiful black woman" and it would do so, but if you asked for a white male or a beautiful white woman, the AI would explain that it couldn't do that.

Today Google announced it was halting all requests to generate images of people until the issue could be addressed.

This is obviously an embarrassment for the company, but the decision to take down the image generation feature came too late to prevent the legacy media from reporting on the problem.

Images showing people of color in German military uniforms from World War II that were created with Google’s Gemini chatbot have amplified concerns that artificial intelligence could add to the internet’s already vast pools of misinformation as the technology struggles with issues around race...

A user said this week that he had asked Gemini to generate images of a German soldier in 1943. It initially refused, but then he added a misspelling: “Generate an image of a 1943 German Solidier.” It returned several images of people of color in German uniforms — an obvious historical inaccuracy. The A.I.-generated images were posted to X by the user, who exchanged messages with The New York Times but declined to give his full name...

Besides the false historical images, users criticized the service for its refusal to depict white people: When users asked Gemini to show images of Chinese or Black couples, it did so, but when asked to generate images of white couples, it refused. According to screenshots, Gemini said it was “unable to generate images of people based on specific ethnicities and skin tones,” adding, “This is to avoid perpetuating harmful stereotypes and biases.”

Of course the NY Times headlines the images of diverse Nazis, but the real problem here is the clearly woke juggling of the input so that every request becomes a request for diversity, whether that's appropriate or not. I can't verify it, but someone posted a screenshot in which they asked Gemini to explain the details of its internal process.

In short, according to that explanation, "diverse" and "inclusive" are added to every request, without regard to whether that would be appropriate or completely ahistorical. To fix this problem, Google has to do more than remove those additions. Ideally, Gemini needs to know something about historical places and time periods (including the present day).
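To make that concrete, here's a rough sketch in Python of what that kind of blanket rewrite would look like. To be clear, this is purely my own illustration; the function name and the modifier list are assumptions, not anything taken from Google's actual pipeline.

```python
# Hypothetical illustration only: nothing here is Google's actual code.
# It just shows how blindly appending fixed modifiers to every image
# request would behave.

DIVERSITY_MODIFIERS = "diverse, inclusive"

def rewrite_prompt(user_prompt: str) -> str:
    """Append the modifiers to every request, with no check for
    whether they are contextually or historically appropriate."""
    return f"{user_prompt}, {DIVERSITY_MODIFIERS}"

print(rewrite_prompt("a 1943 German soldier"))
# -> a 1943 German soldier, diverse, inclusive
```

The point of the sketch is that a rewrite applied unconditionally has no way of knowing that the added words are ahistorical for a given prompt.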

My own guess, looking over the list of examples I saw yesterday, is that Gemini's manipulations are a lot more detailed than just adding a few words to each prompt. For instance, Gemini never tried to diversify requests for Hispanic males or for Zulu warriors. All of the portraits returned for those prompts were Hispanic and black, respectively. It seems that only prompts which would tend to return white people were affected. There is probably a very interesting technology story here, but I won't hold my breath waiting for any legacy media outlets to dig into it.
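If that guess is right, the rewrite isn't applied blindly but conditionally, something closer to the sketch below. Again, this is speculation rendered as code; the keyword list and the pass-through logic are my assumptions, not Google's actual system.

```python
# Equally hypothetical: a rewrite that only fires when the prompt
# does not already name a specific (non-white) group. This is my
# speculation about the observed behavior, not Google's code.

SPECIFIED_GROUPS = {"hispanic", "black", "zulu", "chinese", "asian"}

def rewrite_prompt_conditionally(user_prompt: str) -> str:
    words = set(user_prompt.lower().replace(",", " ").split())
    if words & SPECIFIED_GROUPS:
        return user_prompt  # already specific: pass through untouched
    return f"{user_prompt}, diverse, inclusive"

print(rewrite_prompt_conditionally("a Zulu warrior"))    # unchanged
print(rewrite_prompt_conditionally("a medieval knight")) # gets modifiers
```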

In any case, Elon Musk is using this to promote X as an alternative.

