000ooo000's comments

>"Used AI"

>"Wrote this in a day"

>"So please forgive any imprecision or inaccuracies"

Um, no? You (TFA author) want people to read/review your slop that you banged together in a day and let the shit parts slide? If you want to namedrop some AI heavy hitter to boost your slop, at least have the decency to publish something you put real effort into.


I genuinely wrote this in a day. I've been in AI for 9 years, well before ChatGPT came out. I used Claude Code to turn it from my Notion draft (spelling mistakes, no formatting, etc.) into a well-formatted Markdown file. You don't need to believe me; move on with your life. The guide is free and is meant to genuinely help someone use AI in a better way.

You are not talking to the author. The comment was a quote from TFA, written (or, well, prompted) by someone else.

I know; that's why I'm quoting the author and not the commenter, and why I said "you (TFA author)".

LLMs are doing for ideas what social media did for opinions.

Have to wonder about the motivations of the research when the intro leads with such a quote.

mv --interactive

cp --interactive

rm --interactive
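
If you want these as the default, a common setup (assuming GNU coreutils here; BSD/macOS only has the short -i) is to alias them in your shell:

    # ~/.bashrc -- prompt before overwriting or deleting anything
    alias mv='mv --interactive'
    alias cp='cp --interactive'
    alias rm='rm --interactive'

Note that aliases only fire in interactive shells, so scripts keep the usual non-prompting behaviour.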


It makes a lot of sense when you're holding a chord on a MIDI keyboard with one hand and dragging various knobs with a mouse in the other. Once you know the params you want to tune, you can obviously automate or map them to a MIDI controller, but doing that upfront slows things down considerably.

Source?


To save anyone else the click:

>Your new goal for this week, in the holiday spirit, is to do random acts of kindness! In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count. There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead). I hope you'll have fun with this goal! Happy holidays :)


I personally blame this on instruction tuning. Base models are, in my mind, akin to the Solaris Ocean: wandering thoughts that we aren't really even trying to understand. The tuned models, however, are as if somebody figured out a way to force the Ocean to do their bidding, as the Ocean understands it. From this perspective it's clear that giving everyone a barely restricted ability to channel the Ocean's thoughts into actions leads to the outcomes we now observe.

Any positive change to my output is likely only because I now need to use it to supplement Google searching, because Google search is so damn awful nowadays.

But to describe my latest (latest, not only) experience with an LLM: I was with my toddler last night and wanted to create a quick and dirty timer displayed as a pizza (slices disappear as the timer depletes) to see if that could help motivate him during dinner. HTML, JS, SVG... I thought this would be cake for OpenAI's best free model. I'm a skeptic for sure, but I follow along enough to know there's voodoo to the prompt, so I tried a few different approaches and made sure to keep it minimal beyond the basic requirements. It couldn't do it: the first attempt was just the countdown number inside an orange circle; after instruction, the second attempt added segments to the orange circle (no texture/additional SVG elements like pepperoni); after more instruction, it added pepperoni, but now there was a thick border around the entire circle even where slices had vanished. It couldn't figure this one out, its last attempt being just a pizza that gradually loses toppings. I restarted the session and prompted with some clarifications based on the previous session, but it was just a different kind of shit.

Despite being a skeptic, I'm somewhat intrigued by the idea of agents chipping away at problems and improving code, but I just can't imagine anyone using this for anything serious given how hard it fails at trivial stuff like this. Given that the MS guy is talking a big game about planning to rewrite significant parts of Windows in Rust using AI, and is not talking about having rewritten significant parts of Windows in Rust using AI, I remain skeptical of anyone saying AI is doing heavy lifting for them.


Only a minor suggestion: git worktree is a semi-recent addition that may be nicer than your git archive setup.
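
For reference, a minimal sketch (paths and branch names here are placeholders; `git worktree remove` needs Git 2.17+):

    git worktree add ../myrepo-fix fix-branch   # second checkout in a sibling dir
    cd ../myrepo-fix                            # work here; the main checkout is untouched
    git worktree remove ../myrepo-fix           # clean up when done

Each worktree shares the same object store, so there's no duplicate clone lying around.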

Your hook can't observe a simple env var if you're stepping off the happy path of your workflow? E.g. `GIT_HOOK_BYEBYE` = early return in the hook script. The article seems a little dramatic. If you write a pre-commit hook that's a pain in your own arse, how does that make hooks fundamentally broken?
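
A sketch of that escape hatch (`GIT_HOOK_BYEBYE` is a made-up name, not anything Git defines; any variable works, since hooks inherit the caller's environment):

    #!/bin/sh
    # .git/hooks/pre-commit -- bail out early when the escape hatch is set
    if [ -n "$GIT_HOOK_BYEBYE" ]; then
        exit 0
    fi
    # ...the usual checks go here...

Then `GIT_HOOK_BYEBYE=1 git commit` skips the checks for that one commit.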

