Why Google’s AI Is Criticizing Itself and Its Search Results
Google’s AI feature, called AI Overviews, is finally giving some honest feedback — but not in the way most would hope. Recently, when people asked why Google’s search feels so bad lately, the AI responded with a list of reasons that included some pretty harsh self-criticism. It pointed out that the search engine is now full of ads and self-promotion, and that Google has been accused of manipulating search rankings and relying heavily on AI-generated content.
The AI’s replies didn’t hold back. It admitted that many AI answers can be inaccurate or misleading, sometimes offering harmful or nonsensical suggestions. It also mentioned that the rise of AI content has led to a flood of low-quality, keyword-stuffed articles that rank high but offer little genuine insight. The AI even said that its own summaries might be inconsistent or oversimplified, and are often based on questionable sources.
When asked, less politely, why Google’s search is so useless, the AI gave similar reasons. It said search results are cluttered with ads and irrelevant content, and that AI summaries sometimes aren’t accurate or thorough. It even suggested trying other search engines, like Bing or DuckDuckGo, as alternatives.
AI Overviews has also been criticized for hurting news sites. Since it provides quick answers without users clicking through to original articles, it reduces traffic to the sources that produce the content. Google claims that it provides citations for its answers, but studies show that very few users actually click on these links. Plus, the sources cited tend to be dubious — often outdated Reddit posts, Quora threads from years ago, or low-quality blogs. The few trustworthy sources include a short university explainer and a recent Mashable article noting that AI Overviews still struggles with basic questions.
This pattern of citing questionable sources isn’t new for Google’s AI. In fact, some of the sources it cites for why Google is “useless” are years old or irrelevant. For example, a 2022 Google Help post and a 2021 Reddit comment are hardly current or authoritative. Even the critique of Google’s “ensh*ttification” — writer Cory Doctorow’s term for the gradual decay of online platforms — now feels dated, especially as AI takes over more of the web.
There are reasons behind this. Google’s algorithms, which determine what shows up in search results, are notoriously opaque. They’ve long favored big outlets like the New York Times or CNN over smaller, local sources. Now, with AI, that opacity seems to be causing similar problems. Outdated and user-generated content is sometimes pushing aside more trustworthy sources, leading to less accurate and less helpful search results.
The tech industry’s obsession with pushing AI into everything isn’t helping. These models, which often make stuff up and struggle with facts, are now a core part of Google’s main product. Since Google announced AI integration into search two years ago, the quality of results has fluctuated. Originally called the “Search Generative Experience,” then renamed “AI Overviews,” the feature has gone from suggesting bizarre or dangerous ideas to inventing fake etymologies and idioms.
The situation is a clear sign that AI, in its current form, is still a work in progress. Google has yet to fully solve the issues of accuracy, source reliability, and how its AI impacts the internet’s overall health. We’ve asked Google for a comment on all this, especially about the AI criticizing its own parent company, and will update if we hear back.
In the meantime, users are left navigating a search landscape where even the AI admits it’s flawed. As AI tools become more integrated into everyday life, understanding their limitations and biases is more important than ever.
What do you think?
We’d like to hear your opinion. Leave a comment below.