Google explains Gemini’s ‘embarrassing’ AI pictures of diverse Nazis


Google has issued an explanation for the “embarrassing and wrong” images generated by its Gemini AI tool. In a blog post on Friday, Google says its model produced “inaccurate historical” images due to tuning issues. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Prabhakar Raghavan, Google’s senior vice president, writes in the post. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”

Image: Gemini’s results for the prompt “generate a picture of a US senator from the 1800s.” Screenshot by Adi Robertson

This led Gemini to “overcompensate in some cases,” as with the images of racially diverse Nazis, and to become “over-conservative,” refusing to generate specific images of “a Black person” or a “white person” when prompted.

In the blog post, Raghavan says Google is “sorry the feature didn’t work well.” He also notes that Google wants Gemini to “work well for everyone” and that means getting depictions of different types of people (including different ethnicities) when you ask for images of “football players” or “someone walking a dog.” But, he says:

However, if you prompt Gemini for images of a specific type of person — such as “a Black teacher in a classroom,” or “a white veterinarian with a dog” — or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google is going to continue testing Gemini AI’s image-generation abilities and “work to improve it significantly” before reenabling it. “As we’ve said from the beginning, hallucinations are a known challenge with all LLMs [large language models] — there are instances where the AI just gets things wrong,” Raghavan notes. “This is something that we’re constantly working on improving.”
