Hacker News | seabass's comments

Feels short sighted. Every such change gets me closer to ditching the ecosystem altogether.


Wish this were built into the OS! Love the idea


Definitely going to give this a try. One thing I'm curious about: where are the servers? And if I want to choose hosting geographically close to me, how do I do that?


How much is “the public” making? The title of the post says millions. The title of the article says trillions. The second paragraph of the article says not trillions. Sheesh


I love squoosh! It’s been one of the few PWAs I have installed and actually use regularly.

Does anyone know if their optimization methods are still best-in-class these days? It’s been good enough for all my practical needs, but I know it’s been around for a while and there may be better techniques for some file types now.


Love this! Just wanted to note that I think there’s a mistake in the flyweight pattern page’s example. You’re getting a boolean with Set.has but treating it like a Book (as if you had used Set.get). I also don’t really understand how this saves memory if you’re spreading the result into a new object, but maybe someone here can enlighten me!


Ah, I think I understand now. The return type of createBook is true | Book, which is likely a mistake, but happens to work because spreading a boolean into an object contributes no properties. But if you were to edit the example to have a stable return type of Book then it would no longer save memory, so perhaps that was intentional?


I’m surprised by how good it looks. This is really cool! I do feel like the Q and 4 characters need a little manual tweaking since the blur+threshold technique leaves some artifacts in the corners but those are such minor issues given how readable this font is overall. Love it.


You can compare the writing style with earlier articles like this one from 2020, pre-GPT:

https://andreacanton.dev/posts/2020-02-19-git-mantras/


Strongly disagree. If you read enough of it, the patterns in ai text become unmistakable. Take this paragraph for example:

> Here’s what surprised me: the practices that made my exit smooth weren’t “exit strategies.” They were professional habits I should have built years earlier—habits that made work better even when I was staying.

“It’s not x—it’s y.”, the dashes, the q&a style text from the parent comment, and the overall cadence were too hard to look past.

So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.


Regardless, people are going to start writing naturally like current LLM output, because that's a lot of what they are reading.

A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

So I tried reading some HST myself, and... some open source code documentation immediately got a little punchy.

> So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.

Good point. And if it's actually genuine original text from someone whose style was merely tainted by reading lots of "AI" slop, I guess that might be a reason to prefer reading someone who has a healthier intellectual diet.


> A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

That is honestly incredible and actionable advice.

Can’t wait to sprinkle a taste of the eldritch in my comments after reading some Lovecraft.


Curious - is your concern that the post is 100% AI generated? Or do you object that AI may have been used to clean up the post?


AI writing often leads to word inflation, so getting the original, more concise version is helpful IMO. Hiding it is the annoying part; marking that you used AI to help and offering a 'source code' version would, I think, go over much better. If a person is deceptive and dishonest about something so obvious, how can you trust other things they say?

It also leads to slop spam content. Writing it yourself is a form of anti-spam. I think tools like grammarly help strike a balance between 'AI slop machine' and 'help with my writing'.

And because they are so low effort, it feels like putting links to a google search essentially. Higher noise, lower signal.


> I think tools like grammarly help strike a balance between 'AI slop machine' and 'help with my writing'.

I found Grammarly to be often incorrect, but it's been years since I tried it. I use LanguageTool instead, simply to catch typos.


Ok, well this post from the same author seems very similar in style. Why isn't this ai also? https://andreacanton.dev/posts/2020-02-19-git-mantras/


It has a bunch of human imperfections, and I love that. The lowercase lists and inconsistent casing for similarly structured content throughout, the grammar mistakes, and overall structure. This article has a totally different feel compared to the newest ones. When you say it’s very similar, what are you picking up on? They feel like night and day from my perspective.


LLMs got all these patterns from humans in the first place*. They're common in LLM output because they're common in human output. Therefore this argument isn't very reliable.

If P is the probability that a text containing these patterns was generated by an LLM, then yes, P > 0, but readers who are (understandably) tired of generated comments are overestimating P.

* Edit: I see now that the GP comment already said this.
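
To illustrate the base-rate point with made-up numbers (every rate below is a pure assumption for the sake of the arithmetic, not a measurement):

```typescript
// Bayes' rule with illustrative, assumed rates.
const pLLM = 0.3;               // prior: assumed fraction of comments that are LLM-generated
const pPatternGivenLLM = 0.9;   // LLM text usually shows the telltale patterns
const pPatternGivenHuman = 0.2; // but plenty of human text shows them too

// P(pattern) by total probability, then P(LLM | pattern) by Bayes' rule.
const pPattern = pPatternGivenLLM * pLLM + pPatternGivenHuman * (1 - pLLM);
const pLLMGivenPattern = (pPatternGivenLLM * pLLM) / pPattern;

console.log(pLLMGivenPattern.toFixed(2)); // 0.66: well above the prior, but far from certainty
```

Seeing the patterns raises the odds a lot, but under these assumptions a third of pattern-bearing comments are still human-written.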


Looks really cool! Love the artwork. Right now the video in the readme doesn’t render on github, though. I had to manually download the mp4 from your demo folder to view it.

