I can't speak for the author, but I do often do this. IMO it's a misleading comparison, though: you don't have to debug those things because the compiler rarely outputs incorrect code relative to what you wrote. It's not so simple for an LLM.
Yeah this definitely falls into the category of "I use them so they feel natural", there's nothing amazing about those names.
The underlying problem is that you now run into so many named things (utilities, libraries, programs, etc.) in a day and they all have to differentiate themselves somehow. You can't name every crypto library `libcrypto` for obvious reasons.
You can, but then the names get needlessly long and one of the things we generally like (especially for command-line programs) is names that are short and easy to type. If we're going to make this argument then why not call the unix tools `concatenate`, `difference`, `stream-editor`, etc. Those are way better names in terms of telling you what they do, but from a usability standpoint they stink to type out.
Libraries and programs also have a habit of gradually changing what exactly they're about and used for. Changing their name at that point doesn't usually make sense, so you still end up with long names that no longer match what the thing actually does. Imagine if we were typing out `tape-archive` to make tarballs: it's a historically accurate name, but it gives you no hint about how people actually use it today. The name remains only because `tar` is pretty generic and there's too much inertia to change it. Honestly, I'd say `cat` is the same: it's pretty rare that I see someone actually use it to concatenate multiple files rather than to dump a single file to stdout.
The author is missing the fact that stuff like `libsodium` is named no differently from all the other stuff he mentioned. If he used libsodium often, he may just as well have cited it as well-named due to its relation to salt, and would instead be complaining about some other library name that he doesn't know much about or doesn't use often. I _understand_ why he's annoyed, but my point is that it's simply nothing new and he's just noticing it now.
Short names are a relic of the age of teletypes, when you had to repeatedly type things out. That hasn't been the case for at least three decades. Most good shell+terminal combinations support autocomplete; even the verbose PowerShell becomes fairly easy to use with shell history and autocomplete, which, incidentally, it does very well.
If you are repeatedly typing library names, something is wrong with your workflow.
Niklaus Wirth showed us a way out of the teletype world with the Oberon text/command interface, later aped clumsily by Plan 9, but we seem to be stuck firmly in the teletype world, mainly because of Un*x.
Without looking it up, is it sodium for "salt"? That's about as tethered to the actual use (salt + hash being a common crypto thing) as any of the names in the root comment
I think the problem is that if they're in the road their liability and required smarts go up a lot. Right now it sounds like they're at least partially relying on being the largest thing on the "road" and everyone else will naturally get out of their way.
In this case the problem is that the fed is the one who runs the TSA and created the Real ID rules, but the states are the ones that actually issue the IDs meeting those rules. The fed couldn't force the states to implement the rules and the states didn't want to spend money on something they didn't really care about.
Of course, they didn't really care about it because it's mostly just security theater, and thus the fed was never going to start turning people away simply for not having a compliant ID (which is still true). If there were stronger reasons why everybody needed a Real ID, states would have put more effort into getting everybody one.
There's also the separate issue that the Real ID rules are questionable, and it's not always easy for someone to get a Real ID even if they want one.
It's a single number in that if you take an IQ test one time you get one number, but that doesn't mean you'll get that exact number every single time you take an IQ test. Even ignoring more complex questions about them, your score on an IQ test will vary depending on simple things like how tired you are when you take it, so in practice there's some variance and you do not always get the same number every time you take a test.
Well, based on the paragraphs in the README, it's not actually being updated anymore; it only reflects SteamOS as of August, and the author has stopped running their process to update it.
What prevents a farmer from simply switching back to the non-GMO seeds if the GMO option goes up in price? Or even ignoring that, switching to a different cheaper GMO seed from a different company?
I think that's the piece I and others are missing: isn't it ultimately a question of which seeds will make the farmer the most money? If a particular GMO seed suddenly becomes so expensive that either non-GMO or other GMO seeds are more cost-effective, why can't they just start using those instead?
Not really - if the market price for a crop is such that it depends on the greater volume which can be produced by GMO seeds, switching to non-GMO seeds becomes uneconomic.
Let's say the GMO crop gives you a grain yield of 1 ton/acre and the non-GMO crop gives you 0.5 ton/acre, with the market price set at, say, $100/ton. Switching back cuts the farmer's revenue in half in the best case, all other inputs remaining the same.
Now if the GMO seeds are controlled by a foreign entity, your entire agricultural output becomes dependent on that foreign entity not behaving badly. Whichever nation controls the entity that owns the GMO seed now has leverage over you.
So no, it isn't as simple as "switch back to using non-GMO seeds". This has to be carefully considered before adopting GMO seeds.
"Bugfixes" doesn't mean the code actually got better, it just means someone attempted to fix a bug. I've seen plenty of people make code worse and more buggy by trying to fix a bug, and also plenty of old "maintained" code that still has tons of bugs because it started from the wrong foundation and everyone kept bolting on fixes around the bad part.
One of the frustrating truths about software is that it can be terrible and riddled with bugs, but if you just keep patching enough of them and use it the same way every time, it eventually becomes reliable software ... as long as the user never does anything new and no one pokes the source with a stick.
I much prefer the alternative where it's written in a manner where you can almost prove it's bug free by comprehensively unit testing the parts.
It's a difference of whether the function's parameters are declared or not. If you declare `void foo();` (no prototype) and then call it with a `float` argument, `foo()` is actually passed a `double`, because the default argument promotions apply to the call. If you instead change the declaration to `void foo(float)`, the argument gets passed as a `float`.
> I'd go read the original PR and the discussion that took place.
Until your company switches code repos multiple times and all the PR history is gone or hard/impossible to track down.
I will say, I don't usually make people clean up their commits and also usually recommend squashing PRs for any teams that aren't comfortable with `git`. When people do take the time to make a sensible commit history (when a PR warrants more than one commit) it makes looking back through their code history to understand what was going on 1000% easier. It also forces people to actually look over all of their changes, which is something I find a lot of people don't bother to do and their code quality suffers a lot as a result.
Bisecting on squashed commits is not usually helpful. You can still narrow down to the offending commit that introduced the error but… it’s somewhere in +1200/-325 lines of change. Good luck.
This sounds unattainable to me. In codebases of 2 million lines or more, something as simple as renaming a poorly named base class can touch 5000 lines. The original name wasn't even a mistake, but you'd still like to change it to make the code more readable given the evolution of the codebase. You would not split that up into multiple commits, because that would be a mess and it would not compile unless done in one commit.
Such PRs should be the exception, not the norm. What happens far more often is that this kind of refactoring lands alongside unrelated changes in the same PR. On high-functioning teams I've worked with, it's done as a separate PR/change, because they're aware of the complexity mixing it with in-scope changes adds, and that the refactoring shouldn't be part of the original change's scope.