
We shouldn't forget that there are trade-offs, however. And much depends on the runtime of the language in question.

As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code is running as an interpreted language in V8, JavaScriptCore, or SpiderMonkey, depending on which browser the end user is using. JavaScript is also loosely typed, with zero concept of immutability at the native runtime level.

And immutability in JavaScript, without the native support we might hopefully see in some future version of ECMAScript, has the potential to hurt runtime performance.

I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.

One expensive operation in JavaScript is cloning objects (which includes arrays). If you do that a lot, say because you're using something like Redux or NgRx, where immutability is a design goal and you're therefore cloning your application's runtime state object on every single state change, you are extremely de-optimized for performance, depending on how much state you're holding onto.
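To make that concrete, here is a minimal, hypothetical Redux-style reducer (the state shape, action, and reducer are invented for illustration, not taken from any real codebase). Every dispatch shallow-copies all of the state object's top-level properties, even when only one of them changed:

interface AppState {
  items: string[];
  selectedId: number | null;
  // ...imagine hundreds more keys in a large application
}

type Action = { type: "select"; id: number };

function reducer(state: AppState, action: Action): AppState {
  switch (action.type) {
    case "select":
      // Copies every top-level key of `state` on every dispatch:
      // the cost scales with the size of the state object.
      return { ...state, selectedId: action.id };
    default:
      return state;
  }
}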

And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days when your servers could own your state and your clients could just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends become leaner, in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want it that way, but because the business goals necessitate it.

Then consider the spread operator, and how often you see it in TypeScript code:

const foo = {
  ...bar, // shallow-clones bar, so the cost of this one expression is pegged to how large bar is
  newPropertyValue,
};

// same thing: clones the original array just to push a single item,
// because "immutability is good, because I was told it is"
const baz = [...array, newItem];

And then consider all of the "immutable" Array methods like .reduce(), .map(), and .filter().

They're nice syntactically, and I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will turn an O(N) operation into an O(N^3) one by nesting and chaining these calls with no consideration for the performance impact.
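A contrived sketch of that pattern (all names invented): the chained version rescans the whole users array once per team and allocates intermediate arrays along the way, while a plain loop over a prebuilt index does the same work in one pass each.

interface User { id: number; teamId: number; name: string }
interface Team { id: number }

// O(T * U): for every team, .filter() rescans the entire users array,
// and each chained call allocates a fresh intermediate array.
function rosterNested(teams: Team[], users: User[]): string[] {
  return teams.flatMap((team) =>
    users
      .filter((u) => u.teamId === team.id)
      .map((u) => u.name)
  );
}

// O(T + U): one pass to build an index, one pass to read from it.
function rosterLinear(teams: Team[], users: User[]): string[] {
  const namesByTeam = new Map<number, string[]>();
  for (const u of users) {
    const names = namesByTeam.get(u.teamId) ?? [];
    names.push(u.name);
    namesByTeam.set(u.teamId, names);
  }
  const result: string[] = [];
  for (const team of teams) {
    for (const name of namesByTeam.get(team.id) ?? []) {
      result.push(name);
    }
  }
  return result;
}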

And of course you can write performant or non-performant code in any language. And I am the first to preach that you should write clean, easy-to-maintain code, then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability, and writing immutable JavaScript will put you in a position where performance is worse overall, because the tools you are forced to reach for, as a matter of course, are themselves inherently de-optimized.



> We shouldn't forget that there are trade-offs

As @drob518 already noted, the only benefit of mutation is performance. That's all. That's the single, distinct, valid point in its favor. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.

"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.

So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?

That's the only problem that needs to be solved in the runtime to gain, almost for free, the countless benefits you yourself acknowledge.

But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.


> But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.

I would have agreed with that statement a few years ago.

But what I am seeing in the wild is an ideological attachment to the belief that "immutability is always good, so always do that."

And what we're seeing is NOT a ton of bugs and defects caused by state mutation. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in some part, by developers who are programming in a language that does not support native immutability but are trying to shoehorn it in because of a BELIEF that it will always, without question, cut down on the number of defects.

Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.

Any civil engineer will talk to you about tolerances. Only programmers ever say something is "inherently right" or "inherently wrong" regardless of the situation.

If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to shoehorning in a paradigm the runtime does not natively support, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing negative value, and saying "we should make the runtime faster" is hand-waving to a degree that would, and should, get you fired by that company.

If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.

In a different context, when you are a software engineer being paid to develop customer-facing features, "just make the runtime faster" is not a viable option. It's not even worth talking about, since you have no direct influence over it.

And the reason I brought this up is that we're talking about JavaScript / TypeScript specifically.

In a language like Clojure it's moot, because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoehorn it in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet perform on our devices while also saying "JavaScript developers should do immutability MORE."

At my company, measurable performance degradation is considered a release-blocking defect. So you can't even claim you're reducing defects through immutability when you can point to a single PR that caused a perf regression by trying to do something in an immutable way.

So yeah, it's all trade-offs. It comes down to what you are prioritizing: runtime performance or data integrity? Not all applications will value both equally.


Alright, I admit I have not worked on teams where Immutable.js was used a lot, so I don't have any insight into its impact on performance specifically.

Still, I personally wouldn't call immutability a "trade-off", even in a JS context - for the majority of apps it's still a big win. I've seen that many times with ClojureScript, which doesn't have its own native runtime - it ultimately emits JavaScript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing JS than vanilla JS with Immutable.js on top.

For some kinds of apps, yes, for sure, performance is the ultimate priority. In my mind that's a similar "trade-off" to using C or even assembly because of required performance. It's undeniably important, yet these situations represent only a small fraction of overall use cases.

But sure, I agree with everything you say - immutability is great in general, but not for every case.


Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
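For example, with Immutable.js (mentioned upthread), an update does not copy the whole collection. A rough sketch, assuming the standard Immutable.js Map API:

import { Map } from "immutable";

// A persistent map: set() returns a new Map but shares almost all of its
// internal trie structure with the original, so an update costs roughly
// O(log n) instead of the O(n) a full clone would.
const state = Map<string, number>([["a", 1], ["b", 2]]);
const next = state.set("c", 3);

console.log(state.has("c")); // false - the original is untouched
console.log(next.get("c"));  // 3

Clojure's vectors and maps work the same way internally, which is why idiomatic immutable code there doesn't pay the cloning tax described upthread.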



