Google Faces Backlash for Inaccurate AI-Generated Search Summaries

Google is facing backlash over inaccurate search summaries generated by its artificial intelligence (AI) systems. In May, the company rolled out a revamped version of its search engine that displays AI-generated summaries above traditional results, and social media users soon began posting screenshots of the system’s most outlandish answers.

Google has defended the accuracy of its AI overviews, saying they were extensively tested before launch and are typically reliable. Still, Liz Reid, the head of Google’s search business, acknowledged in a blog post that some summaries were odd, inaccurate, or unhelpful. Some examples were merely silly; others were potentially dangerous or harmful because they spread false information.

One example involved a query about which wild mushrooms are safe to eat, to which Google responded with a lengthy AI-generated summary. Mary Catherine Aime, a professor of mycology and botany at Purdue University, reviewed the response and found that, while much of it was technically correct, it omitted crucial identification details whose absence could lead someone to eat a toxic mushroom. Another widely shared example involved a query about how many Muslim presidents the United States has had, to which Google confidently repeated a debunked conspiracy theory.

In response, Google has implemented “more than a dozen technical improvements” to its AI systems. The company made immediate fixes to prevent a repeat of errors like the false claim that Barack Obama was a Muslim president, which violated Google’s content policies. It has also worked to better detect nonsensical queries that should not receive an AI summary and to limit reliance on user-generated content, such as social media posts, that can offer misleading advice.
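Google’s post does not describe how these filters work internally. As a rough illustration only, the following Python sketch shows the general shape of such gating logic; every name and threshold in it (should_show_ai_summary, source_type, the 50% cutoff, the satire markers) is invented for the example and does not reflect Google’s actual implementation.

```python
# Hypothetical sketch of pre-answer filtering of the kind described above.
# Nothing here reflects Google's real system; it only illustrates gating an
# AI summary behind simple sanity checks on the query and its top sources.

SATIRE_HINTS = {"onion", "satire", "parody"}  # assumed marker strings


def should_show_ai_summary(query: str, top_results: list[dict]) -> bool:
    """Return True only if the query and its sources pass basic checks."""
    words = query.lower().split()

    # Skip empty or nonsensical queries, e.g. heavy word repetition.
    if not words or len(set(words)) < len(words) / 2:
        return False

    # Skip when top results are dominated by user-generated content,
    # which may carry joke answers or misleading advice.
    ugc = sum(1 for r in top_results if r.get("source_type") == "forum")
    if top_results and ugc / len(top_results) > 0.5:
        return False

    # Skip when any top source looks satirical.
    if any(hint in r.get("domain", "")
           for r in top_results for hint in SATIRE_HINTS):
        return False

    return True
```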

According to Reid, the goal of Google’s summaries is to deliver authoritative answers quickly, sparing users from clicking through a series of website links. However, some AI experts caution against relying too heavily on AI-generated answers, which can perpetuate biases, spread misinformation, and endanger people seeking urgent help.

Google’s AI overviews are powered by large language models, which answer questions by predicting the most plausible sequence of words. Such models are prone to hallucination, confidently generating false statements, a problem that has been studied extensively. In her blog post, Reid argued that Google’s summaries generally do not hallucinate or make things up because they are closely integrated with the company’s traditional search engine and draw on top-ranked web results for verification. She attributed inaccuracies instead to misinterpreted queries, nuances of language on the web, or a lack of available information.
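Reid’s description matches what the field calls retrieval-grounded (or retrieval-augmented) generation: the model is asked to answer from retrieved documents rather than from memory alone. The toy Python sketch below shows the general pattern; search, generate, and the prompt format are all stand-ins invented for this example, not Google’s actual pipeline.

```python
# Toy retrieval-grounded summarization: constrain the answer to top search
# results instead of the model's parametric memory. All components here are
# illustrative stand-ins, not Google's real retriever or model.


def search(query: str) -> list[str]:
    """Stand-in retriever returning canned top-result snippets."""
    return [
        "Several toxic wild mushrooms closely resemble edible species; "
        "experts advise never eating one without a positive identification.",
        "Field guides recommend spore prints and expert review before "
        "consuming any foraged mushroom.",
    ]


def generate(prompt: str) -> str:
    """Stand-in language model: simply echoes the first cited source."""
    return prompt.split("[1] ", 1)[1].split("\n", 1)[0]


def grounded_summary(query: str) -> str:
    """Build an answer that is restricted to the retrieved sources."""
    snippets = search(query)[:5]  # keep only the top-ranked results
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)


print(grounded_summary("Which wild mushrooms are safe to eat?"))
```

Grounding of this kind reduces, but does not eliminate, fabricated answers: as the examples above show, the model can still misread a query or faithfully repeat a top-ranked source that is itself wrong.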

By implementing these technical improvements and restrictions, Google aims to raise the quality and credibility of its AI-generated summaries. Even so, ensuring that AI-generated answers are accurate and unbiased remains an ongoing challenge in an age that relies heavily on the technology.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.