Hacker News | mavamaarten's comments

In my limited experience, that's mostly since the 4.6 release. I noticed that with the same prompt it answers much more briefly. A bit jarring indeed, but I prefer it: less BS and filler, and less electricity burned for nothing.

This behavior first appeared in 4.5, mostly for specific types of questions and in "natural conversation" workflows. 4.6 might have pushed it further.

It’s probably an offshoot of making Claude more and more suitable for code/cowork.

Even (uncommon) country TLDs too. I own a .vg domain that's a perfect match for the initials of my last name. My mail ends up in spam quite often too, despite having set up SPF, DKIM, DMARC and all that stuff correctly. It's just not common, so some security systems block it.
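For context, SPF and DMARC are just DNS TXT records that receivers parse. A minimal sketch of what "set up correctly" means at the record level, using hypothetical record strings for an example.vg-style domain (in practice you'd fetch these with a DNS lookup):

```python
# Sanity-check SPF and DMARC TXT record strings.
# The record contents below are illustrative examples, not real records.

def check_spf(txt: str) -> bool:
    """An SPF record must start with v=spf1 and should end in a hard (-all)
    or soft (~all) fail so unauthorized senders are rejected/flagged."""
    return txt.startswith("v=spf1") and ("-all" in txt or "~all" in txt)

def dmarc_policy(txt: str):
    """Return the p= policy (none/quarantine/reject) from a DMARC record,
    or None if the record is malformed."""
    if not txt.startswith("v=DMARC1"):
        return None
    for tag in txt.split(";"):
        key, _, value = tag.strip().partition("=")
        if key == "p":
            return value
    return None

print(check_spf("v=spf1 include:_spf.google.com -all"))                # True
print(dmarc_policy("v=DMARC1; p=reject; rua=mailto:dmarc@example.vg")) # reject
```

The point of the parent comment: even when all of these checks pass, some filters still score the TLD itself, and an uncommon one like .vg can tip the balance.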

It's not just about being common, it's also about the share of abuse coming from such domains.

Or just incompetence: I had to lobby to get .org unblocked for mail at the CS faculty of a university (not mine), 20 years ago.

Usually not; just look at SpamHaus's top abusive TLDs, for example. New TLDs dominate.

So, what you're saying is that Google should work on better privacy controls. Right? Right???

I agree. On top of that, in true Google style, basic things just don't work.

Any time I upload an attachment, it just fails with something vague like "couldn't process file", whether that's a simple .md or .txt with fewer than 100 lines, or a PDF. I tried making a gem today; it just wouldn't let me save it, with another vague error.

I also tried having it read and write stuff to "my stuff" and Google Drive. It would write consistently, but then couldn't read the files back. Or it would read one file from Google Drive and ignore everything else.

Their models are seriously impressive. But as usual Google sucks at making them work well in real products.


I don't find that at all. At work we've no access to the API, so we have to force-feed a dozen (or more) documents, code files, and instruction prompts through the web interface's upload feature. The only failures I've ever had in well over 300 sessions were due to connectivity issues, not interface failures.

Context window blowouts? All the time, but never document upload failures.


I'm talking about Gemini in the app and on the web. As well as AI studio. At work we go through Copilot, but there the agentic mode with Gemini isn't the best either.


Honestly this is as Google product as you can get. Prizes for some, beatings for others.


What I love about Gemini mobile is that, if you look at the app wrong, it completely loses the response. It still generates it (and uses up your quota), but it never displays it!

This is the company that made Android, and it can't make an Android app that fetches a response from a server. Astonishing.


For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.

Going from an email back to my own work feels easier than going from an email to reviewing someone else's work.


Yeah. Plus the fact that ulauncher is already an existing launcher-type app for Linux: https://ulauncher.io/


Nah, that's not how it works. Streaming video is usually cut up into small segments. By having a couple of variants per segment, they can serve you a unique and identifiable sequence of segments without having to re-encode (and re-encrypt) the stream for each user.
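The scheme described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual watermarking API: each segment is pre-encoded in two variants, "A" and "B", and the A/B choice per segment encodes bits of the viewer's ID, so a leaked copy can be traced without per-user transcoding:

```python
# Toy forensic-watermarking sketch: encode a user ID into the sequence
# of pre-encoded segment variants served in a playlist. Filenames and
# the two-variant scheme are hypothetical simplifications.

def watermark_playlist(user_id: int, n_segments: int) -> list[str]:
    """Pick variant A or B for each segment from the low bits of the ID."""
    bits = [(user_id >> i) & 1 for i in range(n_segments)]
    return [f"seg{i:04d}_{'AB'[b]}.ts" for i, b in enumerate(bits)]

def recover_user_bits(playlist: list[str]) -> int:
    """Read the variant letters back out of a leaked playlist."""
    uid = 0
    for i, name in enumerate(playlist):
        if name.endswith("_B.ts"):
            uid |= 1 << i
    return uid

playlist = watermark_playlist(user_id=0b1011, n_segments=8)
assert recover_user_bits(playlist) == 0b1011
```

Real deployments watermark the video frames themselves (so the mark survives screen capture), but the serving-side economics are the same: the variants are encoded and encrypted once, and only the per-user playlist differs.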


Indeed. Don't want people making guns? Ban the making of guns. Banning the production of guns using a 3D printer makes zero sense; you'd have to ban CNC machines too, then.


Subtractive manufacturing is included in the scope of the Washington bill...


Just ban guns, simple! /s


So you pasted someone's comment in an LLM and posted the output here. Cool. Not really.


He's Chinese, and if you had looked into his comment history you'd know he's not someone who uses LLMs for karma farming. Looking at his blog, he has a long history of posting about database topics, going back to before GPT existed.

Should I ever participate in a Chinese speaking forum, I'd certainly use an LLM for translation as well.


Looks to me like they're using an LLM for _translation_, not for generating a response. The model output even says "Here's the _translation_" (emphasis mine).


That would be great, honestly. Imagine being able to install Android apps like Netflix, Disney+, ... on your Steam Deck or Steam Machine and having them work out of the box with Widevine L1. Then you'd truly need only one device attached to your TV for all your entertainment needs. And a great, well-supported one at that.

