I know it's common for everyone to shit on Google here on HN, but I have to say, I do like the AI overviews. They're easy to ignore and sometimes helpful... and I've seen lots of non-tech friends and family using them. I really don't get the vehement hatred the tech / HN community has for this feature (nor do I get the common complaint that "Google search is useless"; it works better than all the others for me...)
I think this feature would get a lot less hate if Google just made it possible to opt out without downloading some third-party browser extension (which is actually what Google's AI summary itself suggests when I ask it how to turn itself off!). If Google cared, a small "opt out" link at the top of the AI suggestion would make most of this hate go away, and would make me stop paying for Kagi.
I also find it extra frustrating when AI summaries appear when I search for correctness-critical information, e.g., "what temperature to cook chicken?" or "can I eat old eggs?" Why force me to scroll past an entire page of AI-generated, 1%-chance-of-being-literally-lethal "summaries" in order to find the CDC's actual recommendation? I don't want to play Russian roulette with my health hoping I don't get a hallucination; I just want the authoritative answer, which Google did an amazing job of providing until a year ago, and which Kagi is doing a pretty great job of now.
For me, it's the fact that content generated by an LLM is fundamentally different from content that comes directly from a search index, but displaying them alongside each other conflates the two. Most people don't know the difference, and place the same level of importance (or maybe even more importance) on AI-generated content. Yes, this content is convenient. However, if the content isn't accurate or correct (which it may or may not be, given that it's just a statistically likely sequence of tokens), then is it actually beneficial as a whole?
The failure cases for them are really bad. I've lost count of the number of times I've seen people "prove" something on social media with a screenshot of a Google AI Overview that's parroting false information it found on the web.
Yes, it will happily regurgitate whatever false information there is on the web. For example, say you see this fake trailer for a Pokémon movie starring Tom Holland [0]. You ask yourself, "is that real?", and you go search "tom holland pokemon" on Google. The AI overview will tell you "Tom Holland has been cast as Ash Ketchum in a live-action adaptation of the Pokémon series. The movie is produced by Warner Bros. and The Pokémon Company." Confirmed! Except it's just spitting back the description it got from that fake trailer.
The downvotes confirm how HN feels, but I largely agree with you. I do not find them wrong as often as everyone here claims, and it appears most regular users are enjoying the feature at this point.