It's tough convincing people that Google AI overviews are often very wrong. People think that if it's displayed so prominently on Google, it must be factually accurate, right?
"AI responses may include mistakes. Learn more"
It's not just mistakes; half the time it's completely wrong, total bullshit information. Even compared to other AI, if you put the same question into GPT 5.2 or Gemini, you get much more accurate answers.
It absolutely baffles me that they didn't do more work or testing on this. Their (unofficial??) motto is literally Search. That's what they're known for. The fact that it's trash is an unbelievably damning indictment of what they are.
Testing on what? It produces answers; that's all it's meant to do. Not correct answers or factual answers, just answers.
Every AI company seems to push two points:
1. (Loudly) Our AI can accelerate human learning and understanding and push humanity into a new age of enlightenment.
2. (Fine print) Our AI cannot be relied on for any learning or understanding, and it's entirely up to you to figure out whether what our AI has confidently told you, and is vehemently arguing is factual, is even remotely correct in any sense whatsoever.
My favorite part of the AI overview is when it says "X is Y (20 sources)", you click on the sources and Ctrl+F "X is Y", and none of them say verbatim what the AI claims they said, so you're left wondering whether the AI made it up completely or paraphrased something that's actually written in one of the sources.
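For what it's worth, that check can be scripted. A minimal sketch, assuming you paste in the claim text and the cited URLs by hand (both are placeholders here, and real pages may need proper scraping/headers rather than bare urllib):

    import urllib.request
    from html.parser import HTMLParser


    class TextExtractor(HTMLParser):
        # Collect visible text from an HTML page, skipping script/style contents.
        def __init__(self):
            super().__init__()
            self.parts = []
            self._skip = False

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip = True

        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self._skip = False

        def handle_data(self, data):
            if not self._skip:
                self.parts.append(data)


    def page_text(url):
        # Fetch the page and flatten it to lowercase plain text.
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = TextExtractor()
        parser.feed(html)
        return " ".join(" ".join(parser.parts).split()).lower()


    def check_claim(claim, urls):
        claim_l = " ".join(claim.split()).lower()
        words = set(claim_l.split())
        for url in urls:
            text = page_text(url)
            if claim_l in text:
                print(f"verbatim match: {url}")
            elif words <= set(text.split()):
                print(f"words all present but not verbatim (maybe a paraphrase): {url}")
            else:
                print(f"claim not found at all: {url}")


    # Placeholder claim and URLs -- substitute whatever the overview actually cites.
    check_claim("X is Y", ["https://example.com/a", "https://example.com/b"])

It only distinguishes "verbatim", "words present somewhere" and "not there at all", which is exactly the ambiguity you end up Ctrl+F-ing for by hand.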
If only we had the technology to display verbatim the text from a webpage in another webpage.
That's because decent (but still flawed) GenAI is expensive. The AI Overview model is even cheaper than the AI Mode model, which is cheaper than the free Gemini model, which is cheaper than the Gemini Thinking model, which is cheaper than the Gemini Pro model, and even that is still very misleading when working on human-language source content. (It's much better at math and code.)
I've gone through this cycle too, and what I realized is that as a developer, a large part of your job is making sure that the code you write works, that it's maintainable, and that you can explain how it works.
I loved Dilbert in the 90s, and had no idea that Scott Adams got himself embroiled in controversy towards the end. Another funny guy who let his right-leaning views become his entire personality.
I don't think he let his right-leaning views become his entire personality. Getting embroiled in controversy is something that happens because of the way other people react to your views, not directly because of those views themselves.
I really wish Lyft invested in maintenance. I used Citibike this week for the first time in about a year, and the Hudson River Greenway dock by NY Waterway had 1/3 of its empty docks broken with flashing red lights, then about 5 ebikes that needed service.
Are you sure that wasn't the "staggered" bike dock? It forces you to dock in the rear row if the neighboring two front row spaces are free. This is to fit more bikes. The blinking red docks aren't broken. They're intentionally unavailable.
Also, the 5 e-bikes probably didn't need "service", they were just waiting for battery swaps. This is by design. The docks don't charge them.
CitiBike maintenance is generally fine. They're not leaving any significant number of broken bikes or docks. I think you may have just misunderstood how it works.
As a developer, you're not only responsible for contributing code, but also for verifying that it works. I've seen this practice put in place on other teams, not just with LLMs, but with devs who contribute bugfixes without understanding the problem.
It's not, but stupid people assume they own the copyright to AI-generated code, so it still has to be said, so that the people who don't understand have a chance.