Hacker News | gcau's comments

The 'rewrite it in lua' crowd are oddly silent now.

How do you know?

[flagged]


Did you really go through the trouble of creating an account just to spit trash? Damn!

I'm a big fan of React, but all the server stuff was a cold hard mistake; it's only a matter of time before the (entire) React team realises it, assuming their Next.js overlords permit it.

Yea, I don't want something trying to emulate emotions. I don't want it to even speak a single word, I just want code, unless I explicitly ask it to speak on something, and even in that scenario I want raw bullet points, with concise useful information and no fluff. I don't want to have a conversation with it.

However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.


To be fair, of all the LLM coding agents, I find Codex+GPT5 to be closest to this.

It doesn't really offer any commentary or personality. It's concise and doesn't engage in praise or "You're absolutely right". It's a little pedantic though.

I keep meaning to re-point Codex at DeepSeek V3.2 to see if it's a product of the prompting only, or a product of the model as well.


It is absolutely a product of the model, GPT-5 behaves like this over API even without any extra prompts.


I prefer its personality (or lack of it) over Sonnet. And tends to produce less... sloppy code. But it's far slower, and Codex + it suffers from context degradation very badly. If you run a session too long, even with compaction, it starts to really lose the plot.


>Maybe a kid managing to struggle through a shitty school has to work harder

It sounds like you think admissions should be based on how hard people think they worked relative to others.


Maybe they should be based on a range of factors that influence how successful the university thinks the candidate will be as an undergraduate? Not just exam results?


It means I think admissions officers sometimes know there’s more to a human than their raw test scores. They likely also know that a decent result at some schools requires more work than a great result at others.

I’ve met smart people who do poorly on exams. I’ve met dumb people who do well on them.


I'm not sure whether they meet the requirements for being a terrorist group, or whether I agree with them being considered terrorists. But I just want to point out that the name of the organisation isn't a valid argument in its favour; the actions of the organisation matter a lot more than the name. For example, on many occasions they've used violence to prevent people from political speech (is that antifascism or fascism?)


It's not an organization, it's a grassroots movement.

However, I agree with you in a sense, in that movements with names are inherently vulnerable to cooptation and suppression.


"'Such' a phishing attack" makes it sound like a sophisticated, in-depth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for. Although anyone can make mistakes, there is such a thing as a negligent, amateur mistake. It's astonishing to me.


Every time I bite my tongue (literally, not figuratively) it's also astonishing to me. The last time I did was probably 3 years ago, and it was probably 10 years earlier for the time before that. Would it be fair to call me a negligent eater? Have you ever tripped over nothing while walking? Humans are fallible, and unless you are in an environment where the productivity loss of a rigorous checklist and routine system makes sense, these mistakes happen.

It would be just as easy to argue that anyone is negligent who uses software without confirming that its vendor's security certifications include whatever processes you imagine would prevent a "human makes one mistake and continues with their normal workflow" error, or that hold updates until they've been evaluated.


Humans are imperfect and anyone can make mistakes, yes. I would argue there are different categories of mistakes, though, in terms of potential outcomes and how preventable they are. A maintainer with potentially millions of users falling for a simple phishing email is both preventable and has a very bad potential outcome. I think all parties involved (the maintainer/npm/the email client/etc) could have done better to prevent this.


I feel that most everyone has some 0.0001% chance of falling for a stupid trick. And at scale, a tiny chance means someone will fall for it.


That's true, but it's like saying most everyone has a small chance of crashing their car. Yet when someone crashes their car because they were texting while driving, speeding, or drunk, we justifiably blame them for it instead of calling them unlucky. We can blame them because there are clear rules they are supposed to know for safety when driving, just as there are for electronic security. The rule for avoiding phishing is called "hang up, look up, call back".


Yeah but society doesn't act as if it's an unthinkable event we never planned for when a car crash happens. Blame someone or don't, but there are going to be emergency responders used to dealing with car crashes coming, because we know that car crashes happen (a lot) and we need to be ready for it.


Yes, of course we need to defend against scammers at multiple levels, because none of them are bulletproof, so putting too much trust in individual developers is also a problem here. Even if they didn't get hacked, they could have just become the hacker themselves.


This is a nice article. I'm trying to make a map save file format. I'm curious if most developers usually use abstractions over bitflags (safely hiding away the bitwise operators from being typoed, etc). My main niggle is I want maximum type safety and compile-time checking, i.e. the compiler preventing me from accessing a bit that isn't being used or mixing things up. My other concern is backwards compatibility if I want to modify the flags. Saving 2 bits for the version?

I want a kind of tree structure of bitflag values, and I'm trying to think of a good way to do it. For example, a Grass/Stone tile: maybe 4 bits for the tile type (0001 = Grass), and then from that point forward the remaining flags depend on the tile type (Grass having the next 2 bits for the grass colour, Stone having 1 bit for whether it's cracked, etc), but in a safe and efficient abstraction where I can't accidentally mix them up. I don't want a pirate software monstrosity where I can't keep track of the different combinations.
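One way to sketch that "type tag plus type-dependent payload" idea in TypeScript (the article's language) is to keep a discriminated union as the in-memory representation and only touch raw bits at the pack/unpack boundary. All the names here (TileType, GrassColor, packTile, the 4-bit layout) are illustrative assumptions, not anything from the article:

```typescript
// Low 4 bits: tile type. Bits above: per-type payload.
enum TileType { Grass = 0b0001, Stone = 0b0010 }
enum GrassColor { Light = 0b00, Dark = 0b01, Dry = 0b10 }

// The union is what the compiler checks: you can't build a Grass tile
// with a `cracked` field, or read `color` off a Stone tile, without
// a type error. Mixing up payload bits is confined to two functions.
type Tile =
  | { kind: TileType.Grass; color: GrassColor }
  | { kind: TileType.Stone; cracked: boolean };

const TYPE_MASK = 0b1111;

function packTile(t: Tile): number {
  switch (t.kind) {
    case TileType.Grass:
      return TileType.Grass | (t.color << 4);
    case TileType.Stone:
      return TileType.Stone | ((t.cracked ? 1 : 0) << 4);
  }
}

function unpackTile(bits: number): Tile {
  const kind = bits & TYPE_MASK;
  switch (kind) {
    case TileType.Grass:
      return { kind, color: ((bits >> 4) & 0b11) as GrassColor };
    case TileType.Stone:
      return { kind, cracked: ((bits >> 4) & 1) === 1 };
    default:
      throw new Error(`unknown tile type ${kind}`);
  }
}
```

The `switch` in `packTile` is exhaustiveness-checked: adding a new variant to `Tile` makes it a compile error until the packing is handled, which covers a lot of the "compiler stops me from mixing things up" requirement.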


Thank you!

Regarding your case, I'm pretty sure that such type safety can be achieved. It depends on the ecosystem you're working in, but if it's C++ I would go for a CRTP implementation with a heavy load of template metaprogramming: defining TYPE_ID=1 for grass and TYPE_ID=2 for stone, and creating BaseTile -> GrassTile/StoneTile with polymorphism via std::variant.

I think this is not exactly a tree structure, but your minimal case can be achieved by assigning a different "concern" to a different set of bits in the number. For instance, bits 31..24 would be TYPE_ID and bits 23..0 would be attributes (colour of the grass, whether the stone is cracked). However, that will quickly become too small to build anything reliable, at which point I would change the tile structure to one non-bitflag integer for the block type, placed sequentially next to a second integer for the attribute bitflags (max 31 attributes per block, which should be quite enough).
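The "different concerns in different bit ranges" layout above can be sketched with a few mask/shift helpers. This assumes the 31..24 / 23..0 split described in the comment; all function names are made up for the example:

```typescript
// Layout assumption: bits 31..24 = TYPE_ID, bits 23..0 = attribute flags.
const TYPE_SHIFT = 24;
const ATTR_MASK = 0x00ffffff;

function makeCell(typeId: number, attrs: number): number {
  // `>>> 0` keeps the result an unsigned 32-bit value in JavaScript,
  // since `|` and `<<` otherwise produce signed 32-bit integers.
  return (((typeId & 0xff) << TYPE_SHIFT) | (attrs & ATTR_MASK)) >>> 0;
}

function cellType(cell: number): number {
  return cell >>> TYPE_SHIFT;
}

function cellAttrs(cell: number): number {
  return cell & ATTR_MASK;
}

function hasAttr(cell: number, bit: number): boolean {
  return (cellAttrs(cell) & (1 << bit)) !== 0;
}
```

Keeping all the masking behind helpers like these is the usual way to get the "hide the bitwise operators from being typoed" property the parent asked about, even without full compile-time checking.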

You would also need to establish the whole map format: the file header (the version could possibly be stored per map in the header, not per tile?), chunks (per-chunk header, chunk data), optimization strategies like RLE, unique separators, serialization and deserialization, endianness auto-conversion, and probably more.
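For the header and endianness point, a minimal sketch: `DataView` takes an explicit little/big-endian flag per read and write, so the file format never depends on the host's byte order. The layout here (4-byte magic, u16 version, u16 chunk count) is a hypothetical example, not a real format:

```typescript
// Made-up magic number for the example ("MAP1" as a u32).
const MAGIC = 0x4d415031;

// Header layout assumption: [u32 magic][u16 version][u16 chunkCount],
// all little-endian regardless of host byte order.
function writeHeader(version: number, chunkCount: number): ArrayBuffer {
  const buf = new ArrayBuffer(8);
  const view = new DataView(buf);
  view.setUint32(0, MAGIC, true); // `true` = little-endian, explicitly
  view.setUint16(4, version, true);
  view.setUint16(6, chunkCount, true);
  return buf;
}

function readHeader(buf: ArrayBuffer): { version: number; chunkCount: number } {
  const view = new DataView(buf);
  if (view.getUint32(0, true) !== MAGIC) throw new Error("bad magic");
  return {
    version: view.getUint16(4, true),
    chunkCount: view.getUint16(6, true),
  };
}
```

Checking the magic up front also gives you a cheap "is this even one of my save files?" guard before any version-dependent parsing runs.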

Anyway, please correct me if I'm wrong, but the compiler will only help you with types via static_asserts; as I imagine it, that would be useful mostly for defining interactions between blocks (which ones are allowed and which are not). For every other case the compiler would need to know your map at compile time to help, and that can't be done.

In TypeScript (the main example language in the article), I'm not 100% sure such compile-time checking is possible, but arktype.io would be an excellent choice for this kind of implementation. I really encourage you to check out their solutions, as really complex systems can be built on top of it.


Whenever my Apple wallet connects to my phone, it plays a totally useless animation that feels like it takes forever and covers the entire screen. In that time, you can't see or do anything on the phone. So annoying, and for no reason. Just give me a little haptic when it connects.


This enrages me so fucking much.

When does my wallet slide slightly from the magnetic center and then back into place most often? When I’m getting it out of my pocket.

When am I trying to just use my goddamn phone the most? When I get it out of my pocket.

So, it ends up being that ~50% of the time I need to use my phone, I have to wait for that goddamn 3 second animation first.

If some engineer introduced a 3 second regression in the time for Face ID to unlock your phone 50% of the time, it would be noticed and fixed immediately. But call that 3 second regression a “surprise and delight animation” and suddenly Apple designers love it and force it on you.


Same when attaching a locked phone to a MagSafe charger. It seems like a small thing, but I actually interact with my locked phone enough for that to get on my nerves. I'd prefer the haptic feedback, but I would even settle for being able to swipe it away. Nope, not an option.


>the individual was unable to distinguish a display name from the actual email address

This is wild to me: not only are they a developer, they even know about SPF/DMARC. Also, the content of the email, asking them to reverify their email address, sounds suspicious and illogical. I know people make mistakes, but it's just crazy, and it shows the importance of companies training employees not to fall for phishing emails.


Dunno, this is also a failure of email client UI, which is designed around a naive world with no bad actors just so it looks cute.

The sender email address could be more prominent.

All link URLs could be visible.

Emails from new senders could have some sort of warning/alert. I used to use an email client that let you approve incoming email addresses, and it once saved me from a Coinbase phishing email because it made me double-check the sender, which was marked as unapproved.

We can't keep blaming the victim when our own software works in favor of bad actors. You're going to let your guard down one day.


Which companies are violating copyright on a massive scale? And with what impact? (You seem to be implying a bigger, badder impact.)


One example: basically all of the major AI players have used Anna's Archive/Libgen's database to unlawfully access millions of books.


To be clear, scraping the entire internet so that ChatGPT knows what Mickey Mouse is may not be a fair use of copyright. Or to be more specific, being able to generate images of Mickey Mouse may not be legal; it's the ingestion of those images that gives the model the ability to generate copyrighted material. I guess the courts will decide that soon-ish?

