
Eat Rocks & Add Glue to Your Pizza: Google’s New AI Overview in Search is a Mess

Google’s recently launched AI search feature, AI Overviews, has come under fire for giving users erratic and sometimes dangerous answers to their queries. The tool, which uses artificial intelligence to generate summaries of search results, has repeatedly surfaced inaccurate information.

TLDR

Google’s new AI search feature, AI Overviews, has been providing inaccurate and sometimes dangerous answers to user queries.
Examples of AI-generated errors include suggesting users mix glue with pizza cheese, eat rocks daily, and cook chicken at unsafe temperatures.
Experts warn that AI language models like Google’s are prone to hallucinations and can perpetuate biases found in their training data.
Despite the errors, Google maintains that the majority of AI Overviews provide high-quality information and that the company is working to refine its systems.

Since the rollout of AI Overviews in the United States last week, social media has been flooded with examples of the feature’s blunders.

In one instance, the AI suggested that users mix non-toxic glue with cheese to make it stick better to pizza, a recommendation that appears to have originated from a joke Reddit post from over a decade ago.

In another, responding to the query “How many rocks should I eat?”, it advised users to eat at least one small rock per day for digestive health, citing “UC Berkeley geologists” as its source.

The AI tool has also made alarming errors when it comes to food safety. When asked about the safe temperature for cooking chicken, AI Overviews reportedly suggested 38 degrees Celsius (about 100 degrees Fahrenheit), far below the recommended 73.9 degrees Celsius (165 degrees Fahrenheit) needed to prevent foodborne illness.

In response to a query about how many US presidents have been Muslim, the tool confidently stated that Barack Obama was secretly Muslim, even though the former president is a practicing Christian.

Experts warn that AI language models like the one powering Google’s search feature are prone to hallucinations, or making things up, and can perpetuate biases found in the vast amounts of data they are trained on.

Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, cautions that such AI systems could confirm people’s existing biases and make it harder to spot misinformation.

Despite the numerous examples of AI Overviews’ mistakes, Google maintains that these instances are not representative of the tool’s overall performance.

The company states that the majority of AI-generated summaries provide high-quality information and that extensive testing was conducted before the feature’s launch.

Google also asserts that it has taken action where violations of its policies have been identified and is using these isolated examples to refine its systems.
