Exactly - if these tools are going to be so revolutionary and different within the next 6 months, and even more so beyond that, then there's no advantage to being an early adopter, since your progress becomes invalid. May as well wait until it's good enough.
Well, yes, this is the thing. I'm somewhat sceptical that these things are going to get good enough to be useful to me on a daily basis, but if they do, then, well, I can just start using them. I'm not going to use a bad tool on the basis that it may one day metamorphose into a good tool.
> if it’s as easy as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed? They are applied deterministically inside a compiler, but often based on heuristics, and the complex interplay of optimizations in complex programs means that sometimes they will not do what you expect them to do. Sometimes they work better than expected, and sometimes worse. Sounds familiar...
> Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed?
Yes, because:
> They are applied deterministically inside a compiler
Sorry, but an LLM randomly generating the next token isn't even comparable.
> Unless you wrote the compiler, you are 100% full of it. Even then you'd be wrong sometimes
You can check the source code? What's hard to understand? If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.
Sure, maybe you're limited by your personal knowledge of the compiler chain, but again, complexity =/= randomness.
For the same source code and compiler version (+ flags), you get the exact same output every time. The same cannot be said of LLMs, because they use randomness (temperature).
> LLMs are also deterministically complex, not random
What exactly is the temperature setting in your LLM doing then? If you'd like to argue pseudorandom generators our computers are using aren't random - fine, I agree. But for all practical purposes they're random, especially when you don't control the seed.
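To make that concrete, here's a toy sketch of what temperature does during next-token sampling (plain Python with made-up logits; `sample_next_token` is a hypothetical helper, not any real model's API). At temperature 0 the choice collapses to a deterministic argmax; above 0, the output depends on the RNG state, so uncontrolled seeds give different completions for the same input.

```python
# Toy illustration of temperature sampling; not any real model's API.
import math, random

def sample_next_token(logits, temperature=1.0, rng=random):
    # temperature == 0: greedy argmax, fully deterministic.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # temperature > 0: softmax over scaled logits, then sample.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()                     # uncontrolled seed => runs can diverge
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r <= cumulative:
            return i
    return len(exps) - 1

logits = [2.0, 1.5, 0.3]                            # scores for three candidate tokens
print(sample_next_token(logits, temperature=0))     # always 0
print(sample_next_token(logits, temperature=0.8))   # varies between runs
```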
> If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.
Right, so you agree that optimization outputs are not fully predictable in complex programs, and what you're actually objecting to is that LLMs aren't like compiler optimizations in the specific ways you care about - and somehow this is supposed to invalidate my argument that they are alike in the specific ways that I outlined.
I'm not interested in litigating the minutiae of this point, programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs. The outputs are generally reliable according to certain criteria, but unpredictable.
LLM models are also typically probabilistic black boxes. The outputs are also unpredictable, but also somewhat reliable according to certain criteria that you can learn through use. Where the unreliability is problematic you can often make up for their pitfalls. The need for this is dropping year over year, just as the need for assembly programming to eke out performance dropped year over year of compiler development. Whether LLMs will become as reliable as compiler optimizations remains to be seen.
> invalidate my argument that they are alike in the specific ways that I outlined
Basketballs and apples are both round, so they're the same thing right? I could eat a basketball and I can make a layup with an apple, so what's the difference?
> programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs
In reality this is at best the bottom 20% of programmers.
No programmer I've ever talked to has described compilers as probabilistic black boxes - and I'm sorry if your circle does. Unfortunately there's no use of probability in them, and all modern compilers are definitionally white boxes (open source).
You missed my point: the original comment is stating that the market for such a device doesn't exist because developers are too finicky and customisation-focussed.
As a counter-example, look at MacBooks, which are about as un-customisable as they come, yet a large portion of developers use them. Meaning the market exists even if it's currently dominated by Apple (which, as you/the post points out, is slipping).
Having said that, I do believe that many brands have way too many SKUs, and I wish they would be more opinionated about what they believe is better for their customers while maintaining clear and strong ethics (reliability should be the top priority).
Yeah, people don't realise this, but shame and guilt (and fear) are our two society-building emotions. Each society has its own mix of these, and there are also "themes" depending on which is the dominant one.
Shame has practically been thrown out the window in certain places and we can see the effects of that: people scamming each other, lying in the streets, etc. Guilt is also being eroded across the West, leading to things like rampant criminality and punishments that are less than a slap on the wrist.
Fundamentally these emotions are designed to keep us in check with the rest of the group. Does this negatively affect some? Yes, but at the benefit of creating high-trust societies. Every time I encounter this topic I can't help but think: don't throw the baby out with the bathwater.
+1 - I also see a huge opportunity for forgejo to become a new stackoverflow if they add federation
The primary issue with SO was that it was disconnected from the actual communities maintaining the project. A federated solution would be able to have the same network effects while handing ownership to the original community (rather than a separate SO branch of the community)