Google’s AI Overviews, designed to give quick answers to search queries, reportedly spits out bogus information known as “hallucinations” and undercuts publishers by pulling users away from traditional links.
The Big Tech giant — which landed in hot water last year after releasing a “woke” AI tool that generated images of female Popes and black Vikings — has drawn criticism for providing false and sometimes dangerous advice in its summaries, according to The Times of London.
In one case, AI Overviews advised adding glue to pizza sauce to help cheese stick better, the outlet reported.
In another, it described a fake phrase — “You can’t lick a badger twice” — as a legitimate idiom.
The hallucinations, as computer scientists call them, are compounded by the tool’s tendency to diminish the visibility of reputable sources.
Instead of directing users straight to websites, it summarizes information from search results and presents its own AI-generated answer along with a few links.
Laurence O’Toole, founder of the analytics firm Authoritas, studied the impact of the tool and found that click-through rates to publisher websites drop by 40%–60% when AI Overviews are shown.
“While these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve,” Liz Reid, Google’s head of Search, told The Times in response to the glue-on-pizza incident.
The Post has sought comment from Google.
AI Overviews was introduced last summer and is powered by Google’s Gemini language model, a system similar to the one behind OpenAI’s ChatGPT.
Despite public concerns, Google CEO Sundar Pichai has defended the tool in an interview with The Verge, stating that it helps users discover a broader range of information sources.
“Over the last year, it’s clear to us that the breadth of area we are sending people to is increasing … we are definitely sending traffic to a wider range of sources and publishers,” he said.
Google appears to downplay its own hallucination rate.
When a journalist searched Google for information on how often its AI gets things wrong, the AI response claimed hallucination rates between 0.7% and 1.3%.
However, a hallucination leaderboard hosted on the AI platform Hugging Face put the actual rate for the latest Gemini model at 1.8%.
Google’s AI models also seem to offer pre-programmed defenses of their own behavior.
In response to whether AI “steals” artwork, the tool said it “doesn’t steal art in the traditional sense.”
When asked if people should be scared of AI, the tool walked through some common concerns before concluding that “fear might be overblown.”
Some experts worry that as generative AI systems become more complex, they’re also becoming more prone to mistakes — and even their creators can’t fully explain why.
The concerns over hallucinations go beyond Google.
OpenAI recently admitted that its newest models, known as o3 and o4-mini, hallucinate even more frequently than earlier versions.
Internal testing on questions about real people showed o3 made up information in 33% of cases, while o4-mini did so 48% of the time.