Google Temporarily Suspends Image Generation in Gemini Chatbot Following Accuracy Concerns

Google announced on Thursday that it would “pause” the image-generation feature of its Gemini chatbot after backlash over historically and factually inaccurate images, including depictions of the Founding Fathers as black and Native American, along with black Vikings and female popes.

After prompts for representative images of historical subjects produced the strangely revisionist results, social media users derided Gemini as “absurdly woke” and “unusable.”

In a statement posted on X, Google said: “We’re already working to address recent issues with Gemini’s image generation feature. We’re going to stop creating people’s images while we do this, and we’ll re-release a better version soon.”

Examples included an AI image of what appeared to be George Washington, rendered as a black man in a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed as a pope, despite the fact that all 266 popes in history have been white men.

Google Gemini was mocked online for producing “woke” versions of historical figures

Gemini even produced “diverse” images of Nazi-era German soldiers, including an Asian woman and a black man in 1943 military uniforms.

Google has not released the guidelines that govern the Gemini chatbot’s behavior, so it is difficult to determine why the program was generating these reimagined versions of historical figures and events.

Google has failed to publish the parameters that govern Gemini’s behavior

“In the name of anti-bias, actual bias is being built into the systems,” said William A. Jacobson, a law professor at Cornell University and founder of the Equal Protection Project.

“This is a problem not only for search results, but also for real-world applications where aiming for end results that resemble quotas through ‘bias free’ algorithm testing really introduces bias into the system.”

Fabio Motoki, a lecturer at the University of East Anglia in the UK and co-author of a report last year that identified a noticeable left-leaning bias in ChatGPT, suggests the issue may lie in Google’s “training process” for the large language model that powers Gemini’s image tool.

“Remember that people’s feedback about what is better and worse is what constitutes reinforcement learning from human feedback (RLHF),” Motoki said in an interview. “This effectively shapes the model’s ‘reward’ function, which is technically its loss function.”

“Therefore, this issue may arise depending on the individuals Google hires or the guidance Google provides them.”
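
For readers unfamiliar with the mechanism Motoki describes, here is a minimal sketch of the pairwise reward-model loss typically used in RLHF, written in Python with PyTorch. The names and numbers are illustrative, not Google’s actual training code; it simply shows how labelers’ “better or worse” comparisons become the loss function the model optimizes.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_scores: torch.Tensor, rejected_scores: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry) preference loss: each pair compares a response
    # human labelers preferred ("chosen") against one they rejected.
    # Minimizing this loss pushes the reward model to score chosen responses
    # higher, so the labelers' judgments directly shape the learned reward.
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Hypothetical scalar reward scores for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.5, 0.9, 1.1])
print(reward_model_loss(chosen, rejected))  # shrinks as chosen outscores rejected
```

Because the loss is built entirely from such comparisons, any systematic tendency among the labelers, or in the guidance they are given, flows directly into the reward the model is trained to maximize, which is Motoki’s point.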

It was a major blunder for the search giant, which only last month had renamed its flagship AI chatbot from Bard and added much-hyped new features, including image generation.

The glitch also came just days after OpenAI, the company behind ChatGPT, unveiled Sora, a new AI tool that generates videos from users’ text prompts.

Google has already acknowledged that changes are needed to address the chatbot’s unpredictable behavior.

Jack Krawczyk, senior director of product management for Gemini experiences at Google, said the company was working to improve these kinds of depictions immediately.

“Gemini’s AI image generation produces a diverse set of people. And this is generally a positive thing because people all over the world use it. But it’s off the mark here.”

When contacted for further comment on its trust and safety criteria, the Gemini chatbot replied that they were not “publicly disclosed due to technical complexities and intellectual property considerations.”

In response to prompts, the chatbot admitted it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”
