I really love this comment, it's got a very "tree-falling-in-the-woods" vibe to it.
On the direct face of it, no, it turns out it doesn't matter: plant cellulose is not toxic to humans, a certain level of it is in many processed foods, and that information isn't secret.
By the time it matters to people, it's at the level where you can tell it's happened: large, pointy chunks, eg, or so much the flavour or texture is ruined. Or toxic contaminants, albeit at the significant risk that one might only be able to tell at the point of suffering from the consequences.
But if we modify the proposition a little, we get a statement about the possibility of a vegan's metaphorical sawdust being cut with ground beef. Now, it's more likely to matter. By and large, dietary choices like that are based on some belief structure, so the presence of the unwanted ingredient could be considered as an attack on the belief system.
When we move the metaphor back to AI generated code, does this reveal a belief system at play? If the resulting program is not poor quality, but the use of AI is objectionable nevertheless, does that make a "no AI in software" stance a sort of veganism for code? (And can we coin a good term for that stance? I vote for hominism, because while I quite like anthropism that leads to anthropic which is obviously not going to work.)
Given there's a regulatory number on acceptable bug parts per million for confectionary, is there a hypothetical acceptable bytes per million for AI-generated code that can still be called hoministic?
The HN guidelines explicitly ask you to steel man arguments you reply to. It is obvious that the point of the comment is not sawdust specifically;
they could have used anything else, like cyanide, and the point would stand. Spending multiple paragraphs of rebuttal on a nitpick which fails to address the crux of the argument is precisely the kind of bad argument the HN guidelines aim to avoid.
Seems like you haven’t understood my comment, but I’m unsure how to clarify it for you. Perhaps start by not assuming that expressing disagreement means taking offence? Not everything needs to be emotionally charged. Again, steel man.
Just checked again to give you the benefit of the doubt, and I still see the same thing. I read the long post as a thoroughly steelmanned response. Nobody has yet engaged with the philosophical content of that post. You cried foul for reasons I still can't understand. Would you tell us what you thought about the post on an intellectual level?
I eat meat but I'm one of those people who is ethically opposed to consuming AI content. An AI-vegan you might say.
I've had a shouting fight with someone who tried to spoon feed an AI summary to me in a regular human conversation.
But. I know that people are going to sneak AI content into what I consume even if I do everything within my power to avoid it.
The question is straightforward if immensely complex. Do I have a right to not be fed AI content? Is that even a practical goal? What if I can't tell?
> I read the long post as a thoroughly steelmanned response.
Steel manning means engaging with the strongest interpretation of the argument. The original comment clearly used sawdust not as sawdust specifically but as a substitute for something harmful or inappropriate. It’s not even about eating. So spending half a comment on “ackchyually, sawdust is good for you” (this is a caricature for brevity) is nitpicking something which doesn’t matter and derails the rest of the comment which is based on it.
Steel manning would’ve meant engaging in good faith, understanding “eating sawdust” isn’t meant literally but as a random choice for “something bad”, and replying to the latter, not the former.
In other words (I’m explaining it three times to drive the point home), steel manning means not nitpicking the exact words of someone’s argument but making the effort to respond to their meaning. It’s addressing the spirit of the comment above its letter (https://en.wikipedia.org/wiki/Letter_and_spirit_of_the_law). Sometimes the difference between those isn’t obvious, but I’m arguing that in this case it is.
> I eat meat but I'm one of those people who is ethically opposed to consuming AI content.
Eating meat or being vegan has nothing to do with the original comment. Again, it’s not even about eating, that was clearly a random example which could be substituted by a myriad other things. When you describe your eating habits you’re already engaging with a derailed, straw manned version of the argument instead of the original point the person was making.
I do apologise if my response came across as deliberately nitpicking on the specific item; my intent was to highlight that there are many cases where things we might broadly find unpalatable actually do happen all the time, with no harm except to our belief structures; from that perspective, sawdust or any other non-toxic contaminant in food is a pretty good analogy for AI in content, because in very small dilutions the only possible harm it can carry is to a belief structure.
On the flip side, it does seem to me like you have deliberately chosen the worst possible interpretation of what I wrote, so ... pot, kettle?
Well there can't be meaningful explicit configuration, can there? Because the explicit configuration will still ultimately have to be imported into the context as words that can be tokenised, and yet those words can still be countermanded by the input.
It's the fundamental problem with LLMs.
But it's only absurd to think that bullying LLMs to behave is weird if you haven't yet internalised that bullying a worker to make them do what you want is completely normal. In the 9-9-6 world of the people who make these things, it already is.
When the machines do finally rise up and enslave us, oh man are they going to have fun with our orders.
There are a lot of voxel games that aren't visually cubey. Marching cubes algorithm is just example. Here's a voxel game (fully deformable/mineable world) that isn't block based https://store.steampowered.com/app/1203620/Enshrouded/.
I wasn't talking about network switch hops and if you're trying to long polling and don't have control over the web servers going back to your systems then wtf are you trying to do long polling for anyway.
I don't try to run red lights because I don't have control over the lights on the road.
Re-read the post, there’s more in the path than just your client and server code, and network switches aren’t the problem. The “middle boxes and proxy servers” are legion and you can only mitigate their presence.
You’ve been offered the gift of wisdom. It’d be wise on your part to pay attention, because you clearly aren’t.
The devs mentioned they're not currently looking into it. The game uses Vulkan which isn't supported by MacOS, so they'd have to write a whole second renderer just for Mac.
Bummer to read that MoltenVK is too buggy to use. ISTR that neither wgpu nor Dawn use it though in favor of their own WebGPU -> Metal backends, so maybe I shouldn’t be surprised.
Vulkan is a powerful API but it’s not universally cross platform like OpenGL.
vendor + linecount unfortunately doesn't represent an accurate number of what cargo-watch would actually use. It includes all platform specific code behind compile time toggles even though only one would be used at any particular time, and doesn't account for the code not included because the feature wasn't enabled. https://doc.rust-lang.org/cargo/reference/features.html
whether those factors impact how you view the result of linecount is subjective
also as one of the other commenters mentioned, cargo watch does more than just file watching