> Any answer other than "that was the most likely next token given the context" is untrue.
"Because the matrix math resulted in the set of tokens that produced the output". "Because the machine code driving the hosting devices produced the output you saw". "Because the combination of silicon traces and charges on the chips at that exact moment resulted in the output". "Because my neurons fired in a particular order/combination".
I don't see how your statement is any more useful. If an LLM has access to reasoning traces it can realistically waddle down the CoT and figure out where it took a wrong turn.
Just like a human does with memories in context - that doesn't mean it's the full story. Your decision making is largely subconscious and nonverbal; you might not be aware of it, but any reasoning you give to explain why you did something is bound to be an incomplete story, constructed by your brain from what it knows - there's hidden state it doesn't have access to. And yet we ask that question of each other constantly.
76k gross per year in Germany is basically the same as that. 100k gross comes out to about 5.5k net per month. The big question is how much is already covered once you're down to the net pay.
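Spelling out the arithmetic implied by those figures (a rough sanity check only; the actual split depends on tax class, church tax, health insurance, etc.):

    \[ 5{,}500 \times 12 = 66{,}000 \qquad 1 - \tfrac{66{,}000}{100{,}000} \approx 34\% \text{ total deductions} \]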
I'm not sure of the situation for software engineers, but engineers on the aerospace and mechanical side working in aerospace in Europe are paid something on the order of 50% less than their counterparts in the US. I always assumed it's just a supply-and-demand problem, but I haven't run the numbers.
If you want to go further and bring other factors in, I would say that, on average, the European folks are only slightly worse off money-wise (owning a house there does seem harder overall), but with more security, time off, etc.
In the US there is a much broader range of experiences in the sector, partly because of personal circumstances (student and auto loans being the biggest) and a lot because of where you live, since pay tends not to scale with cost of living. So someone could live like a king in rural Iowa or a pauper in Los Angeles doing the same job.
> Vibe coding would be catastrophic here. Not because the AI can't write the code - it usually can - but because the failure mode is invisible. A hallucinated edge case in a tax calculation doesn't throw an error. It just produces a slightly wrong number that gets posted to a real accounting platform and nobody notices until the accountant does their review.
How is that different from handwritten code? Sounds like the kind of thing you deal with architecturally (audit trails, review, rollback) and with tests.
It’s shocking to me that people even ask this type of question. How do you not see the difference between a machine that will hallucinate something random if it doesn’t know the answer vs a human that will logic through things and find the correct answer.
Because I've seen the results? LLM failure modes are unintuitive, and their ability to grasp the big picture is limited (mostly by context, I'd say), but I find CC follows instructions better than 80% of the people I've worked with. And consider the mental stamina it would take to grok that much context, even when you know the system, versus what these systems can do in minutes.
As for the hallucinations - you're there to keep the system grounded. Well, the compiler is, then tests, then you. It works surprisingly well if you monitor the process and don't let the LLM wander off when it gets confused.
Because humans also make stupid random mistakes, and if your test suite and defensive practices don't catch it, the only difference is the rate of errors.
It may be that you've done the risk management and deemed the risk acceptable (accepted the risk, in risk management terms) with human developers, and that vibe coding changes the maths.
But that is still an admission that your test suite has gaping holes. If that's been allowed to happen consciously, recorded in your risk register, and you all understand the consequences, that can be entirely fine.
But then the problem doesn't reflect a problem with vibe coding; it reflects a risk management choice you made to paper over test suite holes with an assumed level of human diligence.
> How do you not see the difference between a machine that will hallucinate something random if it doesn’t know the answer vs a human...
Your claim here is that humans can't hallucinate something random. Clearly they can and do.
> ... that will logic through things and find the correct answer.
But humans do not find the correct answer 100% of the time.
The way that we address human fallibility is to create a system that does not accept the input of a single human as "truth". Even these systems only achieve "very high probability" but not 100% correctness. We can employ these same systems with AI.
Almost all current software engineering practices and projects rely on humans doing ongoing "informal" verification. The engineers' knowledge is an integral part of it, and using LLMs exposes this "vulnerability" (if you want to call it that). Making LLMs usable would require such a degree of formalization (of which integration and end-to-end tests are a part) that entire software categories would become unviable. Nobody would pay for an accounting suite that cost 10-20x more.
Which interestingly is the meat of this article. The key points aren’t that “vibe coding is bad” but that the design and experience of these tools is actively blinding and seductive in a way that impairs ability to judge effectiveness.
Basically, instead of developers developing, they've been half-elevated to the management class, where they manage really dumb but really fast interns (LLMs).
But they don't get the management pay, and they are 100% responsible for the LLMs under them, whereas real managers get paid more and can lay blame on and fire the people under them.
Humans who fail to do so find the list of tasks they’re allowed to do suddenly curtailed. I’m sure there is a degree of this with LLMs but the fanboys haven’t started admitting it yet.
> It’s shocking to me that people even ask this type of question. How do you not see the difference between a machine that will hallucinate something random if it doesn’t know the answer vs a human that will logic through things and find the correct answer.
I would like to work with the humans you describe who, implicitly from your description, don't hallucinate something random when they don't know the answer.
I mean, I only recently finished dealing with around 18 months of an entire customer service department full of people who couldn't comprehend that they'd put a non-existent postal address and the wrong person on the bills they were sending, that it was therefore their own fault the bills weren't getting paid, and that other people in their own team had already admitted this, apologised to me, promised they'd fixed it, while actually still continuing to send letters to the same non-existent address.
Don't get me wrong, I'm not saying AI is magic (at best it's just one more pair of eyes no matter how many models you use), but humans are also not magic.
Humans are accountable to each other. Humans can be shamed in a code review, reprimanded, and threatened with consequences for sloppy work. Most humans, once reprimanded, will not make the same kind of mistake twice.
> Humans can be shamed in a code review and reprimanded and threatened with consequences for sloppy work.
I had to not merely threaten to involve the Ombudsman, but actually involve the Ombudsman.
That was after I had already escalated several times and gotten as far as raising it with the Data Protection Officer of their parent company.
> Most humans, once reprimanded, will not make the same kind of mistake twice.
To quote myself:
other people in their own team had already admitted this, apologised to me, promised they'd fixed it, while actually still continuing to send letters to the same non-existent address.
> How do you not see the difference between a machine that will hallucinate something random if it doesn’t know the answer vs a human that will logic through things and find the correct answer.
I see this argument over and over again when it comes to LLMs and vibe coding. I find it a laughable one, having worked in software for 20 years. I am 100% certain humans are just as capable, if not better, than LLMs at generating spaghetti code, bugs, and nonsensical errors.
It's shocking to me that people make this claim as if humans, especially in some legacy accounting system, would somehow be much better at (1) recognizing their mistakes, and (2) even when they don't, not fudge-fingering their implementation. The criticisms of agents are valid, but the incredulity that they will ever be used in production or high-risk systems is, to me, just as incredible. Of course they will -- where is Opus 4.6 compared to Sonnet 4? We've hit an inflection point where replacing hand coding with an agent and interacting only via prompt is not only doable, but highly skilled people are already routinely doing it. Companies are already _requiring_ that people do it. We will soon hit another inflection point where the incredulity at using agents even in the highest-stakes applications will age really, really poorly. Let's see!
Your point is the speculative one, though. We know humans can and have built incredibly complex and reliable systems. We do not have the same level of proof for LLMs.
Claims like yours should wait at least 2-3 years, if not 5.
That is also speculative. Well, let's just wait and see :) but the writing is on the wall. If your criticism is about where we're at _now_ and whether _today_ you should be vibe coding in highly complex systems, I would say: why not? As long as you hold that code to the same standard as human-written code, what is the problem? If you say "well, reviews don't catch everything", okay, but the same is true for humans. Yes, large teams of people (and maybe smaller teams of highly skilled people) have built wonderfully complex systems far out of reach of today's coding agents. But your median programmer is not going to be able to do that.
Your comment is shocking to me. AI coding works. I have seen it with my own eyes last week and today.
I can therefore only assume that you have not coded with the latest models. If your experience is with GPT-4o or earlier, or you have only used the mini or light models, then I can totally understand where you're coming from. Those models can do a lot, but they aren't good enough to run on their own.
The latest models absolutely are; I have seen it with my own eyes. AI moves fast.
I think the point he is trying to make is that you can't outsource your thinking to an automated process and also trust it to make the right decisions at the same time.
In places where a number, a fraction, or a non-binary outcome is involved, there is an aspect of growing the code base over time with human knowledge and failure.
You could argue that speed of writing code isn't everything; many times being correct and stable is more important. For example, a banking app doesn't have to be written and shipped fast, but it has to be done right. ECG machines, money, and meatspace safety automation all come under this.
Replace LLM with employee in your argument - what changes? Unless everyone at your workplace owns the system they are working on - and that's a very high bar; maybe 50% of the devs I've worked with are capable of owning a piece of non-trivial code, especially if they didn't write it.
Reality is you don't solve these problems by relying on everyone to be perfect - everyone slips up - to achieve results consistently you need processes/systems to assure quality.
Safety-critical systems should be even better equipped to adopt this because they already have the systems to promote correct outputs.
The problem is those systems weren't built for LLMs specifically so the unexpected failure cases and the volume might not be a perfect fit - but then you work on adapting the quality control system.
>> Replace LLM with employee in your argument - what changes?
I mentioned this part in my comment. You cannot trust an automated process to do a thing and expect the same process to verify whether it did it right. This is with regards to any automated process, not just code.
This is not the same as manufacturing, as in manufacturing you make the same part thousands of times. In code the automated process makes a specific customised thing only once, and it has to be right.
>>The problem is those systems weren't built for LLMs specifically so the unexpected failure cases ...
We are not talking of failures. There is a space between success and failure that the LLM can easily wander into.
That's not what I get out of the comment you are replying to.
In the case being discussed here, one of code matching the tax code, perfection is likely possible; perfection is defined by the tax code. The SME on this should be writing the tests that demonstrate adherence to the tax code. Once they do that, it doesn't matter whether they, the AI, or a one-shot consultant write it, as far as correctness goes.
If the resulting AI code has subtle bugs in it that pass the test, the SME likely didn't understand the corner cases of this part of the tax code as well as they thought, and quite possibly could have run into the same bugs.
That's what I get out of what you are replying to.
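To make that concrete, here is a minimal sketch of the kind of SME-authored executable spec being described above; the function name, brackets, and rates are entirely made up for illustration, not real tax rules:

    // hypothetical: the SME encodes the statutory brackets as executable checks
    double withholding(double income)
    {
        if (income <= 10_000) return 0;
        if (income <= 40_000) return (income - 10_000) * 0.25;
        return 7_500 + (income - 40_000) * 0.50;
    }

    unittest
    {
        // boundary cases taken straight from the (fictional) tax table
        assert(withholding(10_000) == 0);
        assert(withholding(40_000) == 7_500);
        assert(withholding(50_000) == 12_500);
    }

If the AI-generated implementation passes tests like these, the remaining risk is that the tests themselves missed a corner of the tax code - which is exactly the point made above.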
With handwritten code, the humans know what they don’t know. If you want some constants or some formula, you don’t invent or guess it, you ask the domain expert.
Let's put it this way: the human author is capable of doing so. The LLM is not. You can cultivate the human to learn to think in this way. You can for a brief period coerce an LLM to do so.
Humans make such mistakes slowly. It's much harder to catch the "drift" introduced by LLM because it happens so quickly and silently. By the time you notice something is wrong, it has already become the foundation for more code. You are then looking at a full rewrite.
The rate of the mistakes versus the rate of consumers and testers finding them was a ratio we could deal with and we don’t have the facilities to deal with the new ratio.
It is likely over time that AI code will necessitate the use of more elaborate canary systems that increase the cost per feature quite considerably. Particularly for small and mid sized orgs where those costs are difficult to amortize.
Rendering in the browser has nothing to do with being able to do remote editing like you can in VS Code - you would just be able to edit files accessible to the browser.
Just like you can hook local native VS Code up to a random server via SSH, browser rendering is just a convenience for client distribution.
You would need the full client/server editor architecture that VS Code has.
I wonder if we'll get to "vi for LLMs" - if the model were trained to use that kind of text navigation and you showed the context around the cursor as it navigates.
Would also be worth having special tokens for this kind of navigation.
I had the same thought. It's probably not too difficult to fine-tune a small model for it using the "introduce a random mutation and describe the issue" workflow from TFA.
IMHO D just missed the mark with the GC in core. It was released at a time when a replacement for C++ was sorely needed, and it tried to position itself as that (obvious from the name).
But by including the GC/runtime it went into a category with C# and Java which are much better options if you're fine with shipping a runtime and GC. Eventually Go showed up to crowd out this space even further.
Meanwhile in the C/C++ replacement camp there was nothing credible until Rust showed up, and nowadays I think Zig is what D wanted to be with more momentum behind it.
Still kind of salty about the directions they took because we could have had a viable C++ alternative way earlier - I remember getting excited about the language a lifetime ago :D
I'd rather say that the GC is the superpower of the language. It allows you to quickly prototype without focusing too much on performance, but it also allows you to come back to the exact same piece of code and rewrite it using malloc at any time. C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
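For anyone who hasn't seen what that control looks like, here is a minimal sketch using druntime's core.memory API (the function names are just for illustration):

    import core.memory : GC;

    void hotLoop() @nogc nothrow
    {
        // @nogc makes the compiler reject any hidden GC allocation in here
    }

    void main()
    {
        auto buf = new ubyte[](4 * 1024 * 1024); // ordinary GC allocation
        // ... use buf ...
        GC.free(buf.ptr);        // eagerly hand the block back, no collection needed
        GC.disable();            // pause collections around a latency-sensitive section
        scope(exit) GC.enable();
        hotLoop();
    }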
> C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
C# C interop is pretty smooth, Java is a different story. The fact that C# is becoming the GC language in game dev is proving my point.
>Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
Yes, and the no-GC stuff was just an attempt to backpedal on the wrong initial decision, so as to fit into the use cases they should have targeted from the start, in my opinion.
Look, D was an OK language, but it had no corporate backing and there was no case where it was "the only good solution". If it had been an actual C++ modernization attempt that stayed C-compatible, it would have seen much better adoption.
True, but you still need to either generate or manually write the bindings. In D, you just import the C headers directly without depending on the bindings' maintainers.
> If it had been an actual C++ modernization attempt that stayed C-compatible, it would have seen much better adoption
Any D compiler is literally also a C compiler. I sincerely don't know how one can be more C-compatible than that.
> Yes, and the no-GC stuff was just an attempt to backpedal on the wrong initial decision
I think that it was more of an attempt to appease folks who won't use GC even with a gun to their head.
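To illustrate the "also a C compiler" point above with ImportC, here is a rough sketch; the file names are arbitrary and the exact invocation may vary by compiler version:

    // point.c - plain C, fed to the D compiler as-is
    typedef struct { double x, y; } Point;
    double dot(Point a, Point b) { return a.x * b.x + a.y * b.y; }

    // app.d
    import point;        // the C translation unit is imported like any D module
    void main()
    {
        Point p = { 3, 4 };
        assert(dot(p, p) == 25);
    }

    // build (roughly): dmd app.d point.c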
I'm not saying D didn't have nice features - but if D/C#/Java are valid options, I'm never picking D - language benefits cannot outweigh the ecosystem/support behind the other two. Go picked a niche with backend plumbing and got Google backing to push it through.
Meanwhile, look at how popular Zig is getting two decades later. Why is that not D? D also has comptime and has had it for over a decade, I think? Zig proves there's a need that D was in the perfect spot to fill if it did not make the GC decision - and we could have had two decades of software written in D instead of C++ :)
> D was in the perfect spot to fill if it did not make the GC decision
I just find it hard to believe that the GC is the one big wart that pushed everyone away from the language. To me, the GC combined with the full power of a systems language are the killer features that made me stick to D. The language is not perfect and has bad parts too, but I really don't see the GC as one of them.
It's not the GC, it's that D has no direction. It's a kitchen sink of features, and the optionality (betterC, GC, etc.) just fragments the ecosystem, making reusing code hard.
Regarding kitchen-sink-ness, it's at least nowhere near as bad as C++, but that bar is basically below the ground anyway, so it's not much to write home about.
Go had a similar early trajectory where C++ programmers rejected it due to the GC. It gained traction among Python/Ruby/JavaScript programmers who appreciated the speed boost and being able to ship a single static binary.
You never get a second chance at making a good first impression.
I believe that many people who gladly use Rust or Zig or Go nowadays would be quite happy with D if they were willing to give it a fair evaluation. But I still often find people going "D? I would never use a language where the ecosystem is split between different standard libraries" / "D? No thanks, I prefer compilers that are open source" or similar outdated claims. These things have not been true for a long time, but once they are stuck in people's heads, it is over. And these claims spread to other people and get stuck there.
If you do not want to use a GC, it is trivial to avoid it and still be able to use a large chunk of the ecosystem. But often avoiding GC at all costs is not even necessary - you mostly want to avoid it in specific spots. Even many games today are written with tasteful usage of GC.
The one thing that really is a fair disadvantage for D is its small community. And the community is small because the community is too small (chicken/egg) and many believe in claims that have not been true for a long time ...
> You never get a second chance at making a good first impression.
There's a good number of younger programmers like myself who hadn't heard of D before, say, 2017, when those claims were still true. Our first impression of D comes from its state today, which is not that far behind other emerging languages.
> The fact that C# is becoming the GC language in game dev is proving my point.
That is just the Unity effect. Godot adopted C# because they get paid to do so by Microsoft.
C# allows far less control over garbage collection compared to D. The decision to use C# is partly responsible for the bad reputation of Unity games, as it causes a lot of stutter when people are not very careful about how they manage memory.
The creator of the Mono runtime actually calls using C# his multi-million-dollar mistake and now works on Swift bindings for Godot instead: https://www.youtube.com/watch?v=tzt36EGKEZo
C# wouldn't be a problem for Unity if they hadn't mapped most engine abstractions to class hierarchies with reflection-based dispatch instead of value-type handles and the occasional interface, and had dropped the Boehm GC. .NET has actually got a lot of features for avoiding allocations on the hot paths.
I agree Mono is bad compared to upstream .NET, but I used to write game prototypes with it back before .NET Core without as many performance issues as I still find in Unity. It was doable with a different mindset.
AFAIK, the original reason to build IL2CPP was to appease console certification and leave behind Mono's quirky AOT on iOS. Capcom is also using their own C# implementation targeting C++ for console certification.
Allegedly some games are now managing to ship on console with ports of .NET NativeAOT.
> The fact that C# is becoming the GC language in game dev is proving my point.
Respectfully, it doesn't prove your point. Unity is a commercial product that employed C# because they could sell it easily, not because it's the best language for game dev.
Godot supports C# because Microsoft sponsored the maintainers precisely on that condition.
dplug is a framework for building audio plugins that do realtime signal processing, and its creator has produced several well-sold plugins under the "Auburn Sounds" studio name.
Interesting. That might fit the bill, though I am not completely sure.
Do you happen to know why D has not been accepted into the benchmarks game at debian.net? I heard that D developers contributed D code, but that D was never accepted.
D by definition meets FFmpeg's criteria because it's also a C compiler. Because of that I never wondered how D performs in the benchmarks, as I know for sure that it can give me the performance of C where I need it.
But then, to use D for performance, would I have to master both D and C and their interaction? That doesn't seem great - it's like having to learn two languages and also how they interact.
No, you can just write D. It'll have the same performance as C, if you write C-like code. It might have better performance than C if you use templates (just like in C++).
Not necessarily, you can just call the functions in the C library from D as you'd call them from C or C++ with the added benefit of being able to leverage the D GC, RAII, macros etc.
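As a small sketch of what "just call the C functions" looks like in D (using libc symbols that druntime already declares, plus one declared by hand):

    // druntime ships bindings for the C standard library out of the box
    import core.stdc.stdio  : printf;
    import core.stdc.string : strlen;

    // or declare an extern(C) function yourself; no binding generator needed
    extern (C) int atoi(const(char)* s) @nogc nothrow;

    void main()
    {
        printf("len=%zu val=%d\n", strlen("hello"), atoi("42"));
    }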
Dunno about the Debian benchmarks game or their build environment. I did my own benchmarks and it was quite easy to write working D code compared to C, C++, or Rust. I used LDC, the LLVM D compiler, as opposed to DMD. Dub is not as seamless as Cargo, but given that you have to set things up manually, it doesn't encourage dependency hell.
If you're writing networking code, Go is probably a better choice than vibe.d.
You can use C but you are not forced to. In fact, you can write C and convert it automatically to D (though it will need some amount of manual editing afterwards). C is supported as a syntax option but it's still the same compiler for both under the hood. As rightly pointed out by the user above, you can write the same high performance code in the D syntax. The reverse is not true, though -- using high level concepts like classes and GC allocation is not supported in the C syntax.
I used to think of D as the same category as C# and Java, but I realized that it has two important differences. (I am much more experienced with Java/JVM than C#/.Net, so this may not all apply.)
1. Very low overhead when calling native libraries. Wrapping native libraries from Java using JNI requires quite a bit of complex code and build-system configuration, and the calls themselves carry overhead. So, most projects only use libraries written in a JVM language -- the integration is not nearly as widespread as in the Python world. The Foreign Function and Memory (FFM) API is supposed to make this a lot easier and faster. We'll see if projects start to integrate native libraries more frequently. My understanding is that foreign function calls in Go are also expensive.
2. Doesn't require a VM. Java and C# require a VM. D (like Go) generates native binaries.
As such, D is a really great choice when you need to write glue code around native libraries. D makes it easy, the calls are low overhead, and there isn't much need for marshaling and unmarshaling data because the type representations are consistent. D has lower cognitive overhead, more guardrails (which are useful when quickly prototyping code), and a faster / more convenient compile-debug loop, especially with regard to C++ templates versus D generics.
Native calls from C# are MUCH better than the Java experience. It's a massive part of why I chose it when it came out. Today, C# is pretty great... not every MS dev shop is, though; they are often, like Java shops, excessively complex for their own sake.
On #2, I generally reach for either TS/JS with Deno if I need a bit more than a shell script, or Rust for more demanding things. I like C# okay for the work stuff that I do currently though.
What are you referring to regarding Java? I'm aware C# has AOT (and IL2CPP for Unity projects), but I don't recall hearing about any sort of Java native binary that isn't just shipping a VM and Java bytecode (ignoring the short-lived GNU Java compiler).
Java has had AOT compilers since around 2000, they only happened to be commercial, Excelsior JET was the most famous one.
There were several vendors selling AOT compilers for embedded systems, nowadays they are concentrated into two vendors, PTC and Aicas.
Then you have the free beer compilers GraalVM and OpenJ9, which are basically the reason why companies like Excelsior JET ended up closing shop.
Also .NET has had many flavours, starting with NGEN, Mono AOT, Bartok, MDIL, .NET Native, and nowadays Native AOT.
Both ecosystems are similar to Lisp/Scheme nowadays, having a mix of JIT and AOT toolchains, each with its plus and minus, allowing the developers to pick and choose the best approach for their deployment scenario.
My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
Do you want GC? Great! Do not want GC? Well, you can turn it off, and lose access to most things. Do you want a borrow-checker? Great, D does that as well, though less wholeheartedly than Rust. Do you want a safer C/memory safety? There's the SafeD mode. And probably more that I forget.
I wonder if all these different (often incompatible) ways of using D end up fragmenting the D ecosystem, and in turn make it that much harder for it to gain critical mass.
> My (likely unfair) impression of D is that it feels a bit rudderless
The more positive phrasing would be that it is a very pragmatic language. And I really like this.
Currently, opinionated languages are really in vogue. Yes, they are easier to market, but I have personally soured on this approach now that I am a bit older.
There is not one right way to program. It is fun to use an opinionated language until you hit a problem that it doesn't cover very well, and suddenly you are in a world of pain. I like languages that give me escape hatches - that allow me to program the way I want to.
>My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
This can very clearly be said about C++ as well, which may have started out as C With Classes but became very kitchen sinky. Most things that get used accrete a lot of features over time, though.
FWIW, I think "standing out" due to paradigm commitment is mostly downstream of "xyz-purity => fewer ways to do things => have to think/work more within the constraints given". This then begs various other important questions, of course.. E.g., do said constraints actually buy users things of value overcoming their costs, and if so for what user subpopulations? Most adoption is just hype-driven, though. Not claiming you said otherwise, but I also don't think the kind of standing out you're talking about correlates so well to marketing. E.g., browsers marketed Javascript (which few praised for its PLang properties in early versions).
1. Runtime: A runtime is any code that is not a direct result of compiling the program's code (i.e. it is used across different programs) that is linked, either statically or dynamically, into the executable. I remember that when I learnt C in the eighties, the book said that C isn't just a language but a rich runtime. Rust also has a rich runtime. It's true that you can write Rust in a mode without a runtime, but then you can barely even use strings, and most Rust programs use the runtime. What's different about Java (in the way it's most commonly used) isn't that it has a runtime, but that it relies on a JIT compiler included in the runtime. A JIT has pros and cons, but they're not a general feature of "a runtime".
2. GC: A garbage collector is any mechanism that automatically reuses a heap object's memory after it becomes unreachable. The two classic GC designs, reference counting and tracing, date back to the sixties, and have evolved in different ways. E.g. in the eighties and nineties there were GC designs where the compiler could either infer a non-escaping object's lifetime and statically insert a `free` or have the language track lifetimes ("regions", 1994) and have the compiler statically insert a `free` based on information annotated in the language. On the other hand, in the eighties Andrew Appel famously showed that moving tracing collectors "can be faster than stack allocation". So different GCs employ different combination of static inference and dynamic information on object reachability to optimise for different things, such as footprint or throughput. There are tradeoffs between having a GC or not, and they also exist between Rust (GC) and Zig (no GC), e.g. around arenas, but most tradeoffs are among the different GC algorithms. Java, Go, and Rust use very different GCs with different tradeoffs.
So the problem with using the terms "runtime" and "GC" colloquially as they're used today is not so much that it differs from the literature, but that it misses what the actual tradeoffs are. We can talk about the pros and cons of linking a runtime statically or dynamically, we can talk about the pros and cons of AOT vs. JIT compilation, and we can talk about the pros and cons of a refcounting/"static" GC algorithm vs a moving tracing algorithm, but talking in general about having a GC/runtime or not, even if these things mean something specific in the colloquial usage, is not very useful because it doesn't express the most relevant properties.
It's pretty obvious from the context that runtime/GC means having a runtime with a tracing GC - and the tradeoffs are well known. These discussions were played out over the last two decades - we all know GC can be fast, but there were and are plenty of use-cases where the tradeoffs are so bad that it's a non-starter.
Not to mention that writing a high quality GC is a monumental task - it took those decades for C# and Java to get decent - very few projects have the kind of backing to pull that off successfully.
In practical terms, think about the complexity of enabling WASM that someone mentioned in this thread, compared to reusing the C runtime and skipping the tracing GC.
To be fair, I'm kind of venting in this thread; Walter Bright owes me nothing, it's his project, and I had fun playing with it. I'm just sad we couldn't have gotten to Zig 20 years ago when we were that close :)
Ironically, those optimizations came from .NET avoiding the GC and introducing primitives to avoid it better.
And .NET is moving heavily in the AOT/pre-compilation direction for optimization reasons as well (source generators, AOT).
If you look at the changelogs for the past few versions of the framework from a perf perspective, the most significant moves are: introducing new primitives to avoid allocating, moving more logic to compile time, and making AOT better and work with more frameworks.
A programming language having a GC doesn't mean every single allocation needs to be on the heap.
C# is finally at the sweet spot that languages like Oberon, Modula-3, and Eiffel were at in the late 90s, before they were unfortunately overshadowed by Java's adoption.
Go and Swift (RC is a GC algorithm) are there as well.
D could be there in the mainstream as well, if there were a bit more steering toward what it wants to be, instead of having others catch up on its ideas.
That is what made me look into the language after getting Andrei Alexandrescu's book.
The point is that if you need performance you need to drop below the tracing GC, and depending on your use case, if that's the majority of your code it makes sense to use a language that's built for that kind of programming (zero-cost abstractions). Writing C# that doesn't allocate is like wearing a straitjacket, and the language doesn't help you much with manual memory management. Linters kind of make it more manageable, but it's still a PITA. It's better than Java for sure in that regard, and it's excellent that you have the option for hot paths.
I'd rather have the productivity of a GC (regardless of which kind), manual allocations in system/unsafe code blocks, and value types, than go back to bare-bones C-style programming, unless there are constraints in place that leave no other option.
Note that even Rust's success has triggered managed-language designers to research how far they can integrate linear, affine, effect, and dependent types into their existing type systems, as a way to combine the best of both worlds.
To the point that even in Rust circles there are now those speaking about a higher-level Rust that is supposed to be more approachable.
This is essentially how I feel -- GC by default with user control over zero-copy/stackalloc behavior.
Modern .NET doesn't even make it difficult to avoid allocations, with the Span<T> API and the work they've done to minimize unnecessary copies/allocs within the standard library.
(I say this as a Kotlin/JVM dev who watches from the sideline, so not even the biggest .NET guy around here)
That is no longer the case in C# 14/.NET 10. D has lost 16 years, counting from the publishing date of Andrei's book, letting other programming languages catch up with more relevant features.
You are forgetting that a language with a less mature ecosystem isn't much help.
Yes, they added AOT, but it's still challenging to do anything that requires calling into the OS, because you're going to need the bindings. It will still add some overhead under the hood, and you'll need to add more overhead yourself to convert the data to blittable types and back.
Mixing C# with other languages in the same project is also difficult because it only supports MSBuild.
> You are forgetting that a language with a less mature ecosystem isn't much help.
> Mixing C# with other languages in the same project is also difficult because it only supports MSBuild.
No, this is not true. You can invoke the compiler directly with no direct call to MSBuild whatsoever.
Even using the dotnet command, which uses MSBuild under the hood, you are free to use your own build system. As an example - this code uses a Makefile to invoke the build: https://github.com/memsom/PSPDNA
If you want to call csc directly, it will compile with args just fine. And if you have a working C# compiler on your platform, whether or not it uses MSBuild behind the scenes is kind of inconsequential.
You may also call the msbuild command directly, and it more or less does the same thing as the dotnet command, but hardly anyone ever calls msbuild directly these days.
> Rust also has issues using anything besides cargo.
D also has its own build system but it's not the only option. Meson officially supports building D sources. You could also easily integrate D with SCons, though there's no official support.
OP saying Rust has a kind of GC is absurd. Rust keeps track of the lifetime of variables and drops them at the end of their lifecycle. If you really want to call that a GC, you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
You see, OP is trying to muddy the waters when they claim C has a runtime. While there is a tiny amount of truth to that, in the sense that there's some code you don't write present at runtime, if that's how you define runtime the term loses all meaning, since even assemblers insert code you don't have to write yourself, like keeping track of offsets and so on.
Languages like Java and D have a runtime that include lots of things you don’t call yourself, like GC obviously, but also many stdlib functions that are needed and you can’t remove because it may be used internally. That’s a huge difference from inserting some code like Rust and C do.
To be fair, D does let you remove the runtime or even replace it. But it’s not easy by any means.
> If you really want to call that a GC you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
Except for the memory management literature, because it's interested in the actual tradeoffs of memory management. A compiler inferring lifetimes, either automatically for some objects or for most objects based on language annotations, has been part of GC research for decades now.
The distinction of working at compile time or runtime is far from huge. Working at compile time reduces the work associated with modifying the counters in a refcounting GC in many situations, but the bigger differences are between optimising for footprint or for throughput. When you mathematically model the amount of CPU spent on memory management and the heap size as functions of the allocation rate and live set size (residency), the big differences are not whether calling `free` is determined statically or not.
So you can call that GC (as is done in academic memory management research) or not (as is done in colloquial use), but that's not where the main distinction is. A refcounting algorithm, like that found in Rust's (and C++'s) runtime is such a classic GC that not calling it a GC is just confusing.
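For readers who haven't seen that modelling exercise, the usual back-of-the-envelope version for a tracing collector (my summary of the standard analysis, not something from this thread) is:

    \[ \text{GC CPU cost per byte allocated} \;\approx\; \frac{c \cdot L}{H - L} \]

where \(L\) is the live set, \(H\) the heap size, and \(c\) the cost of tracing a live byte: each collection traces the live set (cost \(c \cdot L\)) and buys roughly \(H - L\) bytes of allocation headroom before the next one. That is the footprint-versus-throughput dial being described; a refcounting or compiler-inferred-free scheme instead pays per pointer update or per object, keeping \(H\) close to \(L\).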
I should add that the JVM (and Go) also infers lifetime for non-escaping objects and "allocates" them in registers (which can spill to the stack; i.e. `new X()` in Java may or may not actually allocate anything in the heap). The point is that different GCs involve compiler-inferred lifetimes to varying degrees, and if there's a clear line between them is less the role of the compiler (although that's certainly an interesting detail) and more whether they generally optimise for footprint (immediate `free` when the object becomes unreachable) or throughput (compaction in a moving-tracing collector, with no notion of `free` at all).
There are also big differences between moving and non-moving tracing collectors (Go's concurrent mark & sweep and Java's now removed CMS collector). A CMS collector still has concepts that resemble malloc and free (such as free lists), but a moving one doesn't.
> But is it not easy to opt out of in C, C++, Zig and Rust, by simply not using the types that use reference counting?
In C, Zig, and C++ sure. In Rust? Not without resorting to unsafe or to architectural changes.
> And how does your performance analysis consider techniques like arenas and allocating at startup only?
Allocating at startup only in itself doesn't say much because you may be allocating internally - or not. Arenas indeed make a big difference and share some performance behaviours with moving-tracing collectors, but they can practically only be used "as god intended" in Zig.
Zig got so deep into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)
No, defer is always optional, which makes it highly error prone.
There's errdefer, which only defers if there was an error, but presumably you meant what you wrote, and not that.
BTW, D was the first language to have defer, invented by Andrei Alexandrescu, who urged Walter Bright to add it to D 2.0 ... in D it's spelled scope(exit) = defer, scope(failure) = errdefer, and scope(success), which only runs if there is no error.
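For anyone who hasn't seen D's spelling of these, a small illustration (the file name is arbitrary):

    import std.stdio;

    void process()
    {
        auto f = File("data.txt", "r");
        scope(exit)    f.close();                // defer: runs however we leave the function
        scope(failure) writeln("rolling back");  // errdefer: runs only if an exception escapes
        scope(success) writeln("committed");     // runs only on normal completion

        foreach (line; f.byLine)
        {
            // ... process each line ...
        }
    }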
Re: the point about Zig: especially considering I used and played a lot with D's BetterC model when I was a student, I wonder what Walter, as a language designer, thinks about the development and rise in popularity of Zig. Of course, thinking "strategically" about a language's adoption comes off as Machiavellian in a crowd of tinkerers/engineers, but I can't help but wonder.
Fil-C, the new memory-safe C/C++ compiler, actually achieved that by introducing a GC; with that in mind, I'd say D was kind of a misunderstood prodigy in retrospect.
There are two classes of programs. Stuff written in C for historic reasons that could have been written in a higher-level language, but where a rewrite is too expensive - Fil-C.
Stuff where you need low-level control - Rust/C++/Zig.
Nah, there are only three classes: stuff where you need simplicity, fast compilation times, portability, interoperability with legacy systems, or high performance - C. Stuff where you need perfect memory safety - Fil-C. And stuff where you need a combination - C + formal verification. Not sure where I would put Rust/C++/Zig (none of those offers perfect memory safety in practice).
FillC works fine with all C code, no matter how low-level. There's a small performance overhead, but for almost every scenario it's an acceptable overhead!
For most apps it's much less than that, and in most cases it's unnoticeable. I think it would be more productive if you could point out an app that has noticeably worse performance on FillC, so that the cause could be looked at and perhaps even fixed, so that eventually there would be nearly zero examples like that.
You did not point out any inaccuracy; you literally made a baseless comment about it being "up to 5 times" slower without providing an example where that's the case.
"That's just rude" - oh my, I never thought asking for evidence of a performance problem on HN would be "rude". And yes, if something is actually 5x slower, it's very likely a performance bug that can be fixed, as FillC's author has made clear on several occasions.
It's sad that people lie here. It's commonly recognized that programs run 1.5-5 times slower.
> if something is actually 5x slower, it's very likely a performance bug that can be fixed, as FillC's author has made clear on several occasions.
Another lie from someone who doesn't even know what the program is called.
I continue to think a subset of D without GC, commonly known as "D as C" or Das C, could have been marketed as a language of its own, with a specific target audience, in direct competition with Zig.
I get where you're coming from, and if this were a package I'd agree - but having this built in as part of the tooling is nice - one less dependency - and bash isn't as ubiquitous as you assume.
They are using this to virtue signal - but in reality it's just not compatible with their business model.
Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
Yeah, you can't predict anything with 100% certainty either.
By repeating propaganda at you, though, desperate financiers can hack your brain's innate prediction loop to convince you you're knocking on the door of infamy.
Look, I get you. You're trying to fill the hole created when father never came back with cigarettes. Mom always blamed you for his leaving. But little Warboy screaming "Witness me make line go up!", everyone else is a self-selecting meat suit too, working unintentionally (simply distracted by their own lives' needs, they never encounter your pitch) and in some instances intentionally (fomenting economic and political instability) against you to support themselves.
"Because the matrix math resulted in the set of tokens that produced the output". "Because the machine code driving the hosting devices produced the output you saw". "Because the combination of silicon traces and charges on the chips at that exact moment resulted in the output". "Because my neurons fired in a particular order/combination".
I don't see how your statement is any more useful. If an LLM has access to reasoning traces it can realistically waddle down the CoT and figure out where it took a wrong turn.
Just like a human does with memories in context - does't mean that's the full story - your decision making is very subconscious and nonverbal - you might not be aware of it, but any reasoning you give to explain why you did something is bound to be an incomplete story, created by your brain to explain what happened based on what it knows - but there's hidden state it doesn't have access to. And yet we ask that question constantly.
reply