torginus's comments | Hacker News

This is just a theory of mine, but the fact that people don't see LLMs as something that will grow the pie and increase their output, leading to prosperity for all, just means that real economic growth has stagnated.

From all my interactions with C-level people as an engineer, what I learned from their mindset is their primary focus is growing their business - market entry, bringing out new products, new revenue streams.

As an engineer I really love optimizing our current infra and building tools and improved workflows, which many of my colleagues have considered a godsend, but from a C-level perspective it seems to be just a minor nice-to-have.

While I don't necessarily agree with their world-view, some part of it is undeniable - you can easily build an IT company with very high margins, say a 3x revenue/expense ratio, and in that case growing revenue is a much more lucrative way of growing the company than cutting costs.


I think AI has no moral compass, and optimization algorithms tend to find 'glitches' in the system where great reward can be reaped for little cost - the way a neural net trained to play Mario Kart will eventually find all the places where it can glitch through walls.

After all, its only goal is to minimize its cost function.

I think that behavior is often found in code generated by AI (and by real devs as well) - it fixes a bug by special-casing that one buggy codepath, resolving the issue while keeping the rest of the tests green, but it doesn't ask the deeper question of why that codepath was buggy in the first place (often it isn't - something else is feeding it faulty inputs).
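
To make the pattern concrete, here's a hypothetical sketch (made-up function, not from any real codebase) of the kind of special-case "fix" I mean:

    // Hypothetical "keep the tests green" patch. The real bug is that one
    // caller passes an amount in cents while this function expects dollars,
    // but the special case below hides that instead of fixing the caller.
    function formatPrice(amount: number): string {
      if (amount === 199900) {
        return "$1,999.00"; // hard-coded to make the one failing test pass
      }
      return `$${amount.toFixed(2)}`;
    }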

These agentic AI-generated software projects tend to be full of vestigial modules that the AI tried to implement and then disabled when it couldn't make them work, as well as quick-and-dirty fixes like reimplementing the same parsing code every time it's needed.

An 'aligned' AI, in my interpretation, not only understands the task to its full extent, but also understands what a safe, robust, and well-engineered implementation looks like. However powerful it is, it refrains from these hacky solutions and would rather give up than resort to them.


> meat waves

sigh


>> meat waves

> sigh

I agree it's an unfortunate way of presenting it, but Stalin was a guy who didn't care for anyone's life and was literally sending people in waves to die, with NKVD sadists shooting scared young boys whenever they tried to run away from the horror of war. Dead if you go, dead if you don't. It's romanticized and celebrated today, but it was a mass tragedy incomparable to anything that happened to any other country in recent history.


We can only hope Reddit shares the same fate. Its only saving grace - as much as it pains me to say it - is that it's still not Facebook.

.NET Framework is a Windows system component, but it's been deprecated and Microsoft is moving away from it. The modern .NET ships either as a downloadable runtime/SDK or bundled with your application, meaning your app has no external dependencies.

I wonder if this was the true message of the Claude C compiler - not that Claude can clone a compiler, but that once Anthropic has built a harness it can use to observe the behavior and outputs of an existing program (GCC), it can recreate it with a good enough (at least functioning) replica.

My 2 cents: I am not an experienced React dev, but the React compiler came out recently with React 19, and it's supposed to do the same thing as Svelte's compiler - eliminate unnecessary DOM modifications by explicitly tracking which components rely on what state - thus making useMemo() unnecessary.

Since the article still references useMemo(), I wonder how up-to-date the rest of the article is.


React has always been tracking which component relies on what state, independent of the compiler. That is one of the reasons the rules of hooks have to exist - React tracks whether a component calls useState() and thus knows the corresponding setState function that manipulates that particular state.
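
A minimal sketch of why call order matters (hypothetical component, standard useState API):

    import { useState } from "react";

    function Counter() {
      // React identifies each piece of state by the order of hook calls,
      // which is why hooks can't be called conditionally or in loops.
      const [count, setCount] = useState(0);      // hook slot #1
      const [label, setLabel] = useState("hits"); // hook slot #2
      return (
        <button
          onClick={() => setCount(count + 1)}
          onDoubleClick={() => setLabel("double hits")}
        >
          {label}: {count}
        </button>
      );
    }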

Idk why people claim React is bloat, especially since you can switch to Preact (4kb) most of the time without changes if filesize is an issue for you.
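
For reference, the swap is usually just an alias (a sketch assuming a Vite project; the aliases are the ones Preact's compat layer documents):

    // vite.config.ts - route react imports to preact/compat
    import { defineConfig } from "vite";

    export default defineConfig({
      resolve: {
        alias: {
          react: "preact/compat",
          "react-dom": "preact/compat",
          "react-dom/test-utils": "preact/test-utils",
        },
      },
    });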


> Idk why people claim React is bloat

Because it's very hard to control its rendering model. And the fact that multi-billion-dollar startups and multi-trillion-dollar companies hiring Ivy League charlatans still have bloated, low-performing websites written in React (that don't even need React...) clearly shows the issues aren't that trivial.

> React has always been tracking what component relies on what state, independent of the compiler.

This still requires completely parsing and analyzing your components and expressions, which is why there is no highly performant UI library that can avoid directives.


As someone that has spent many days on performance, I will tell you that bundle size has minimal impact on performance. In most cases the backend dominates any performance discussion.

Apps are slow because of missing caching, naive pagination, poorly written API call waterfalls, missing DB indexes, a bad data model, database choice, or a slow serverless environment. An extra 1 MB of bundle adds maybe 100 ms to the one-time initial load of the app, which can be mitigated trivially with code splitting.
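
As an example of that mitigation, a minimal code-splitting sketch using React's standard lazy/Suspense APIs (component names are made up):

    import { lazy, Suspense } from "react";

    // The heavy dashboard chunk is only downloaded when it's first rendered,
    // so it doesn't inflate the initial bundle.
    const AdminDashboard = lazy(() => import("./AdminDashboard"));

    export function App() {
      return (
        <Suspense fallback={<p>Loading...</p>}>
          <AdminDashboard />
        </Suspense>
      );
    }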


I didn't mention bundle size at all. We're focusing on rendering.

You can write slow, unperformant code in whatever framework and language you like. React is not particularly slow, nor particularly bloated compared to the competition.

Because people make shitty React apps and people think React is slow because of it.

It's definitely not slow here. It's within 20% of Svelte. That's not nothing, but it's not the huge monster people claim it is.

https://krausest.github.io/js-framework-benchmark/2025/table...


When 99% of React apps (websites tbh, with very few needs for complex reactivity...), including those built by multi-billion/trillion-dollar companies, struggle to produce non-bloated, non-slow, accessible websites, you know that the library is simply too complex to tame.

Of course React can be performant, but still, you need a PhD in its internals and hooks to get what you have in Vue/Nuxt/Svelte or whatever out of the box.


It does not matter what framework you choose; if you carelessly throw money at something, it is not guaranteed to be good or fast. A framework doesn't replace company culture.

Plus, in my experience, PhDs do not make good developers - they often fall into the trap of thinking too abstractly vs. being pragmatic.


I say all of this as someone that would rather use Svelte, and doesn't really care what's going on in the React ecosystem. This is purely about its raw performance.

> you're getting a PhD in its internals and hooks to get what you have in vue/nuxt/svelte whatever out of the box.

You can look at React, Svelte, and Vue benchmarks here: https://krausest.github.io/js-framework-benchmark/2025/table...

Outside of Swap Rows, React is like 15% slower than Vue, 30% slower than Svelte. That's not nothing, but it's definitely not as catastrophic as you're implying. And these aren't hyper optimized benchmarks, these are pretty naive implementations.

You know what is actually slow? Alpine JS. It's slow as fuck. But people talk about it like it's some super lean framework.

> When 99% of React apps (websites tbh, with very few needs for complex reactivity...) including those built by multi billion/trillion $ companies, struggle to produce non-bloated, non-slow, accessible websites, you know that the library is simply too complex to tame.

Ignoring the nonsense 99% figure, I don't think entire sites should be React. I am talking about its rendering because that's what all of your comments are about. Not whether an entire site should be generated with just JS.

I can't explain to you what tomfoolery billion-dollar companies conduct to get such garbage. I can tell you it's not unique to React. Apple Music is dreadful, and it's Svelte. There were plenty of slow jQuery websites. You are seeing more bad React websites because you are seeing more React websites in general. It's the PHP problem.


useMemo is for preserving calculations derived from dependencies between renders, renders generally being caused by state changes. React always tries to track state changes, but complicated state (represented by deep objects) can get recalculated too often - I'm not sure React 19 improves this; it doesn't seem to from reading the documentation.

On edit: useMemo is often used by devs to cover up mistakes in rendering architecture. I believe React 19 improves things so you wouldn't need to write useMemo yourself, but I'm not sure it actually makes useMemo not at all useful.
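
For context, the manual version looks something like this (hypothetical component and Order type, standard useMemo API):

    import { useMemo } from "react";

    type Order = { id: string; total: number };

    function OrderList({ orders }: { orders: Order[] }) {
      // Without useMemo this sort re-runs on every render,
      // even when `orders` hasn't changed since the last one.
      const sorted = useMemo(
        () => [...orders].sort((a, b) => a.total - b.total),
        [orders]
      );
      return <ul>{sorted.map(o => <li key={o.id}>{o.total}</li>)}</ul>;
    }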


The React compiler adds useMemo everywhere, even to your returned templates. It makes useMemo the most common hook in your codebase, and thus very necessary - just not as necessary to write manually.

What we did for our build agents was to just install the required version of the build tools via Chocolatey. But cool approach!

Nowadays you can also use winget for it.

Same. Choco solves this with a one-liner for me.

Note that this also doesn't work on Linux - your system's package manager probably has no idea how to install and handle multiple versions of packages and headers.

That's why docker build environments are a thing - even on Windows.

Build scripts are complex, and even though I'm pretty sure VS offers pretty good support for having multiple SDK versions installed at the same time (which I've used), it only takes a single script that wasn't written with versioning in mind to break the whole build.


> Note that this also doesn't work on Linux - your system's package manager probably has no idea how to install and handle having multiple versions of packages and headers.

But this isn't true. Many distros package major versions of GCC/LLVM as separate packages, so you can install and use more than one version in parallel, no Docker etc. required.

It can indeed be true for some things, such as the C library, but often not for the compilers.


The closest thing I saw to this was some vendors shipping their SDKs with half the desktop userland (in a similar 'blob' fashion to what the post complains about), with shell scripts setting up paths so that their libs and tools are found before the system ones.

To give a concrete example of what I was talking about, RHEL has “gcc-toolset” for installing multiple GCC versions in parallel:

https://developers.redhat.com/articles/2025/04/16/gcc-and-gc...


This seems to be the same approach I saw with other SDKs (for example Qt), which I wrote about in my previous post - the official versions ship half the userland dependencies in a directory under /opt/

and use some scripts (chroot or LD_LIBRARY_PATH maybe, not an expert) to create a separate environment for the given toolset.


In RHEL8/9, there is a command "scl" which works by updating the PATH... so you can run "scl enable gcc-toolset-9 bash" to start a new bash shell using GCC 9.

In RHEL10, instead of "scl", each toolset has an independent command (actually just a shell script) like "gcc-toolset-9-env bash" which does the same thing

chroot or LD_LIBRARY_PATH isn't necessary; changing PATH is enough.

Or in fact, on some other distros (this isn't RHEL's scl system), alternative compilers are installed using a prefix, so you just do `CC=PREFIX-gcc` (assuming you have a Makefile configured with the standard conventions).

e.g. for hobbyist/recreational reasons, I have done MS-DOS software development under Linux before. The DOS cross-compiler gets installed as "i586-pc-msdosdjgpp-gcc" so I just do "CC=i586-pc-msdosdjgpp-gcc make" to build my project for DOS instead of Linux.

Similarly, different clang versions are often installed with a version suffix. So in another project I had, I'd do "CC=clang-11 make" when I wanted to use clang 11 and "CC=clang-15 make" when I wanted to use clang 15. (You can tell from the version numbers I haven't touched that hobby project of mine for quite a while now.)


Until the day there is a symlink or an environment variable with the incorrect value.

Guix and Nix can do that. They were built from the ground up to have multiple, simultaneous versions of everything installed in a separate store, and you can request on a per-shell basis what you want in your environment.

Tbh, I think Windows' stable ABI/API at all levels + excellent docs is something the open source community could learn from.

Software that is poorly documented, has no stable API, and has no way of accepting extensions other than modifying the source is not very 'free', in the sense that if I have a good idea, I'll have crazy amounts of trouble getting the change to users even if I manage to decipher and modify the source appropriately.

This describes many software projects. If I wanted to change GCC in a way the original authors didn't approve of, I'd have to fork the entire project and lobby the maintainers to include my version.

If I wanted to change most MS stuff, like Office, I could grab the COM object out of the registry and override it with mine, just to list one example. Then I could just put the binary on my website, and others would be able to use it.

As for MS not publishing these particular docs - it's not like MS doesn't publish tomes on everything, including the kernel. If you're interested in Windows internals, I recommend the Windows Internals book series by Pavel Yosifovich, or really anything by him or Mark Russinovich.

It's also a testament to the solidity of Windows' design that tons of stuff written decades ago is still relevant today.


> If I wanted to change most MS stuff, like Office, I could grab the COM object out of a registry, and override it with mine

This goes back a very long time -- at least to the Windows 3.0 timeframe.

The IBM-only 32-bit OS/2 2.0 came out around the same time as Windows 3.1.

OS/2 2 could run Windows 3 inside what was effectively a VM (a decade before true VMs came to the x86 platform), running the Microsoft code on top of OS/2's built-in DOS emulator.

I remember an IBM person objecting to a journalist saying "so you have access to the Windows source code, and you patch it to run under OS/2?"

Reportedly, the IBM engineer looked a bit pained and said "we don't patch it -- we superclass it in memory to make it a good citizen, well-behaved inside OS/2's protected mode."

(It was over 30 years ago, so forgive me if I'm not verbatim.)

This was subsequently demonstrated to be true rather than a marketing claim. OS/2 2.0 and 2.1 included a "WinOS2" environment. OS/2 Warp 3 made this an option: it was sold in 2 versions, one with a blue box which contained a Windows 3.1 environment, and one with a red box which did not contain WinOS2 but could be installed on top of an existing Windows 3.1 system and then took over the entire Windows environment, complete with all your installed apps, and ran that inside OS/2.

So you kept all your installed 16-bit apps and settings but got a new 32-bit OS with full memory protection and pre-emptive multitasking as well.

Bear in mind that Windows had no activation mechanism then, so you could copy a complete Windows 3.x installation onto a new PC, change some drivers, and it just worked without complaint.

So you could buy a new high-end 486 PC, copy Windows off your old 386, install OS/2 Warp over the top and have a whole new OS with all your apps and their files still running.

This was amazingly radical stuff in the first half of the 1990s.


They invested a huge amount of resources to make sure NT is backward compatible, over the objections of David Cutler. There is an NTVDM extended with WoW for 16-bit Windows, AFAIK. I have a copy of the leaked NT 3.5 source code but I'm not good enough to understand it. Also, modern emulators such as DOSBox probably do a better job of emulating 16-bit stuff.
