Google Apologizes for Faulty AI Image-Generator, Vows Extensive Testing for Safe Rollout

Google issued an apology on Friday for the faulty rollout of its new artificial intelligence image-generator, acknowledging that the tool, which was intended to create diverse and accurate images, had “missed the mark.” In some cases, the AI would “overcompensate” in an attempt to display a diverse range of people, even when it didn’t make sense. The issue became apparent when users noticed that the tool was placing people of color in historical settings where they wouldn’t typically be found.

The problem came to light when Google temporarily halted its Gemini chatbot from generating images with people in response to a social media outcry claiming the tool had an anti-white bias. The controversy surrounding the tool arose after users noticed that it generated racially diverse images in response to written prompts. Some examples that drew attention depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers.

Prabhakar Raghavan, Google’s senior vice president who oversees the search engine and other businesses, addressed the issue in a blog post, stating, “It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users' feedback and are sorry the feature didn’t work well.” However, Raghavan did not provide specific examples of the problematic images.

The new image-generating feature was added to Google’s Gemini chatbot, formerly known as Bard, about three weeks ago. The tool was built on an earlier Google research experiment called Imagen 2. Notably, the researchers who developed Imagen had warned in a 2022 technical paper that generative AI tools could be used for harassment and spreading misinformation, and had raised concerns about social and cultural exclusion and bias.

Google had initially decided not to release a public demo or the underlying code of Imagen due to these concerns. However, the pressure to release generative AI products publicly has increased as a result of the competition between tech companies seeking to capitalize on the growing interest in this emerging technology.

It’s important to note that Google’s mistake with Gemini is not the first incident involving faulty image-generators. Microsoft was forced to make adjustments to its Designer tool after reports emerged of users creating deepfake pornographic images of celebrities. Studies have also shown that AI image-generators can amplify racial and gender stereotypes found in their training data. Without filters, they tend to display lighter-skinned men when prompted to generate images of people in various contexts.

Raghavan emphasized that Google tried to anticipate and avoid these issues when developing Gemini. He stated, “When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology.” He further clarified that the tool aims to work well for all users, regardless of their race, ethnicity, or cultural context.

In response to the backlash, Raghavan announced that Google would conduct extensive testing before reactivating the chatbot’s ability to generate images of people. That commitment drew a skeptical response from University of Washington researcher Sourojit Ghosh, who studies bias in AI image-generators. Ghosh said he was frustrated by Raghavan’s disclaimer, arguing that, given Google’s vast resources and access to data, producing accurate and unbiased results should be a minimal expectation.

The controversy surrounding Google’s faulty AI image-generator was further amplified on social media, with Elon Musk, CEO of X (formerly Twitter), taking to the platform to criticize the company. Musk, who runs his own AI startup, accused Google of “insane racist, anti-civilizational programming”; he has previously criticized both rival AI developers and Hollywood for alleged liberal bias.

As Google takes steps to rectify the issues with its AI image-generator, it is clear that extensive testing and careful consideration of bias and cultural sensitivity are crucial in the development of such technologies. The incident serves as a reminder that even companies with vast resources and technical expertise can stumble when it comes to the ethical challenges presented by artificial intelligence.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.