
It’s interesting to watch other languages discover the benefits of immutability. Once you’ve worked in an environment where it’s the norm, it’s difficult to move back. I’d note that Clojure delivered default immutability in 2009 and it’s one of the keys to its programming model.


I don't think the benefits of immutability have gone undiscovered in js. Immutable.js has existed for over a decade, and JavaScript itself has built-in immutability features (seal, freeze). This is an effort to give vanilla Typescript default-immutable properties at compile time.


It doesn't make sense to say that. Other languages had it from the start, and it has been a success. Immutable.js is 10% as good as built-in immutability and 90% as painful. Seal/freeze and readonly are tiny local fixes that, again, are good, but nothing like "default" immutability.

It's too late and you can't dismiss it as "been tried and didn't get traction".


That's not what I said, and that's not what my reply is about. The value of immutability is known. That's the point of this post. The author isn't a TC39 member (or at least I don't think they are). They're doing what they can with the tools they have.


You didn't understand what you were replying to. Immutability cannot be discovered later on in that sense (in practice).


Javascript DOES NOT in fact have built-in immutability similar to Clojure's immutable structures - seal and freeze are shallow, runtime-enforced restrictions, while Clojure's immutable structures provide deep, structural immutability. They are based on structural sharing and are very memory- and performance-efficient.
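A minimal sketch of that shallowness in plain JS (the object names are made up):

```javascript
// Object.freeze locks only the top level; nested objects remain mutable.
const config = Object.freeze({ retries: 3, limits: { max: 10 } });

try {
  config.retries = 5; // throws in strict mode, silently ignored otherwise
} catch (e) { /* either way, the value does not change */ }

config.limits.max = 99; // succeeds: the nested object was never frozen

console.log(config.retries);    // 3  -- top level protected
console.log(config.limits.max); // 99 -- nested mutation went through
```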

Default immutability in Clojure is a pretty big idea. Rich Hickey spent around two years designing the language around these structures. They are not superficial runtime restrictions but an essential part of the language's data model.


I didn't say that it does have exhaustive immutability support. I said the value of it is known. They wouldn't have added the (limited) support that they did if they didn't understand this. The community wouldn't have built innumerable tools for immutability if they didn't understand the benefits. And in any case, you can't just shove a whole different model of handling objects into a thirty year old language that didn't see any truly structural changes until ten years ago.


> I didn't say that it does have exhaustive immutability support

seal and freeze in js are not 'immutability'. You said what you said - "JavaScript itself has built in immutability features (seal, freeze)".

I corrected you, don't feel bad about it. It's totally fine not to know some things, and it's completely normal to be wrong on occasion. We are all here to learn, not to argue whose toy truck is better. Learning means going from the state of not knowing to the state of TIL.

> you can't just shove a whole different model of handling objects into a thirty year old language

Clojurescript did. Like 14-15 years ago or so. And it's not so dramatically difficult to use. Far simpler than Javascript, in fact.


Your toy truck is being overly pedantic


I am not being pedantic - there's a critical, fundamental conceptual difference here that has real implications for how people write and reason about code.

There's the performance reasoning, a different level of guarantees, and an entirely different programming model.

When someone hears "JS has built-in immutability features", they might think, "great, why do I even need to look at Haskell, Elixir, Clojure, if I have all the FP features I need right here?". Conflating these concepts helps no one - it's like saying: "wearing a raincoat means you're waterproof". Okay, you're technically not 100% wrong, but it's so misleading that it becomes effectively wrong for anyone trying to understand the actual concept.


Sure, though Immutable.js did have persistent data structures like Clojure's.


yeah, Immutable.js is a solid engineering effort to retrofit immutability onto a mutable-first language. It works, but it's never as ergonomic as language-native immutability, and it just feels like you're swimming upstream against JS defaults. It's nowhere near Clojure's elegance - the Clojure ecosystem assumes immutability everywhere and has more mature patterns built around it.

In Clojure, it just feels natural. In js, it feels like extra work. But for sure, if I'm not allowed to write Clojurescript, Immutable.js is a good compromise.


I meant to point out that there is of course value in immutability beyond shared data structures.

I tried Immutable.js back in the day and hated it, like any bolted-on solution.

Especially before Typescript, what would happen is that you'd accidentally assign foo.bar = 42 when you should have called foo.set('bar', 42), causing annoying bugs since nothing actually got updated. You could never just use normal JS operations.

Really more trouble than it was worth.
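That failure mode can be reproduced today with plain frozen objects - a sketch, not Immutable.js itself (foo/bar are illustrative names):

```javascript
// The native assignment syntax still parses against an "immutable" value --
// it just doesn't do what you meant.
const foo = Object.freeze({ bar: 1 });

try {
  foo.bar = 42; // looks like a normal update; strict mode throws, sloppy mode silently ignores
} catch (e) { /* no update happens in either mode */ }

console.log(foo.bar); // still 1 -- nothing changed and nothing warned you

// The update you actually wanted: build a new object.
const next = { ...foo, bar: 42 };
console.log(next.bar); // 42
console.log(foo.bar);  // 1 (original untouched)
```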

And my issue with Clojure, after using it for five years, is the immense amount of work it took to understand code without static typing. I remember following code with pencil and paper to figure out wtf was happening, and doing a bunch of research to see whether it was intentional that, e.g., a user map might not have a :username key/val. Does that represent a user in a certain state, or is it a bug? Rinse and repeat.


> immense amount of work it took to understand code without static typing.

I've used it for almost a decade - I only felt that way briefly at the start. Idiomatic Clojure data passing is straightforward once you internalize the patterns. Data is transparent - a map is just a map - you can inspect it instantly, in place - no hidden state, no wrapping it in objects. When you need some rigidity, Spec/Malli are great.

A missing key in a map is such a rare problem for me that, honestly, I think it's a design problem. You can't blame a dynamically typed language for it, and Clojure is dynamic for many good reasons. The language by default doesn't enforce rigor, so you must impose it yourself; when you don't, you may get confused, but that's not a language flaw - it's the trade-off of dynamic typing.

On the other hand, when I want to express something like "this function must accept only prime numbers", I can't even do that in a statically typed language without plucking my eyebrows. Static typing solves some problems but creates others. Dynamic typing eschews compile-time guarantees but grants you enormous runtime flexibility - trade-offs.


one thing that's missing in JS to fully harness the benefits of immutability is some kind of equality semantics where two structurally identical objects are treated the same
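A quick illustration of what's missing:

```javascript
// Two structurally identical objects are distinct to ===, and to every
// identity-based facility built on it (Map keys, Set membership, etc.).
const a = { x: 1, y: 2 };
const b = { x: 1, y: 2 };

console.log(a === b); // false -- identity, not structure

const seen = new Set([a]);
console.log(seen.has(b)); // false -- the same problem surfaces in collections

// A common (imperfect) workaround is comparing serialized forms:
console.log(JSON.stringify(a) === JSON.stringify(b)); // true, but fragile
// (key order, undefined values, Dates, and cycles all break it)
```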


They were going to do this with Records and Tuples but that got scrapped for reasons I’m not entirely clear on.

It appears a small proposal along these lines has appeared in the wake of that, called Composites[0]. It's certainly a less ambitious version.

[0]: https://github.com/tc39/proposal-composites


Records and Tuples were scrapped, but as this is JavaScript, there is a user-land implementation available here: https://github.com/seanmorris/libtuple


Userland implementations are never as performant as native implementations. That's the whole point of trying to add immutability to the standard.


even when performance might not be an issue or an objective, there are other concerns about a userland implementation: the lack of syntax is a bummer, and the lack of support in the ecosystem is the other giant one - for example, can I use this as props for a React component?


yes, I'm aware of composites (and of the sad fate of Records and Tuples) and I'm hopeful they will improve things. One thing that I'm not getting from the spec is the behavior of the equality semantics in case a Date (or a Temporal object) is part of the object.

In other words, what is the result of Composite.equal(Composite({a: new Date(2025, 10, 19)}), Composite({a: new Date(2025, 10, 19)}))? What is the result of Composite.equal(Composite({a: new Temporal.PlainDate(2025, 10, 19)}), Composite({a: new Temporal.PlainDate(2025, 10, 19)}))?
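Composites aren't implemented in any engine yet, so this can't be answered by running code, but the wrinkle the question pokes at is visible in plain JS today:

```javascript
// Two Dates for the same instant are distinct objects with equal timestamps.
const d1 = new Date(2025, 10, 19);
const d2 = new Date(2025, 10, 19);

console.log(d1 === d2);                     // false -- two distinct objects
console.log(d1.getTime() === d2.getTime()); // true  -- same instant in time

// Any structural-equality proposal has to pick one of these two answers
// for a Date (or Temporal value) nested inside a composite.
```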


Also, interestingly, the Clojurescript compiler in many cases emits safer js code despite being dynamically typed. Typescript removes all type info from the emitted js, while Clojurescript retains its strong runtime typing guarantees in the compiled code.


Mutability is overrated.


Immutability is also overrated. I mostly blame react for that. It has done a lot to push the idea that all state and model objects should be immutable. Immutability does have advantages in some contexts. But it's one tool. If that's your only hammer, you are missing other advantages.


The only benefit to mutability is efficiency. If you make immutability cheap, you almost never need mutability. When you do, it’s easy enough to expose mechanisms that bypass immutability. For instance in Clojure, all values are immutable by default. Sometimes, you really want more efficiency and Clojure provides its concept of “transients”[1] which allow for limited modification of structures where that’s helpful. But even then, Clojure enforces some discipline on the programmer and the expectation is that transient structures will be converted back to immutable (persistent) structures once the modifications are complete. In practice, there’s rarely a reason to use transients. I’ve written a lot of Clojure code for 15 years and only reached for it a couple of times.

[1] https://clojure.org/reference/transients


Immutability is really valuable for most application logic, especially:

- State management

- Concurrency

- Testing

- Reasoning about code flow

Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"

Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'. In immutable-first languages - Clojure, Haskell, Elixir - immutability feels like a superpower. In Javascript, it feels like a chore.


A lot of these concepts don't mean anything to most developers, I've found. A lot of the time I struggle to get the guy I work with to compile and run his code. Even something relatively simple like determinism and pure functions just isn't happening.

This is shockingly common and most developers will never ever hear of Clojure, Haskell or Elixir.

I really feel there are two completely different developer worlds: one where these things are discussed, and the one I am in, where I'm hoping I don't have to make a Teams call to tell a guy "please make sure you actually run the code before making a PR", because my superiors won't can him.


Well, yes, if your shop hires poorly, immutability won’t save you. In fact, nothing will save you.


> Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"

I think immutability is good and should be highly rated - just not as highly rated as it is. I like immutable structures and use them frequently. However, I sometimes think the best solution is one that involves a mutable data structure, which is heresy in some circles. That's what I mean by overrated.

Also, kind of unrelated, but "state management" is another term popularized by react. Almost all programming is state management. Early on, react had no good answer for making information available across a big component tree. So they came up with this idea called "state management" and said that react was not concerned with it. That's not a limitation of the framework see, it's just not part of the mission statement. That's "state management".

Almost every programming language has "state management" as part of its fundamental capabilities. And sometimes I think immutable structures are part of the best solution. Just not all the time.


I think we're talking past each other.

> I like immutable structures and use them frequently.

Are you talking about immutable structures in Clojure(script)/Haskell/Elixir, or TS/JS? Because like I said - the difference in experience can be quite drastic. Especially in the context of state management. Mutable state is the source of many different bugs and frustration. Sometimes it feels that I don't even have to think of those in Clojure(script) - it's like the entire class of problems simply is non-existent.


Of the languages you listed, I've really only used TS/JS significantly. Years ago, I made a half-hearted attempt to learn Haskell, but got stuck on vocabulary early on. I don't have much energy to try again at the moment.

Anyway, regardless of the capabilities of the language, some things work better with mutable structures. Consider a histogram function. It takes a sequence of elements, and returns tuples of (element, count). I'm not aware of an immutable algorithm that can do that in O(n) like the trivial algorithm using a key-value map.
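The trivial mutable version alluded to can be sketched in JS (the function name and sample data are mine):

```javascript
// O(n) histogram built with a mutable Map: one pass, one counter bump per element.
function histogram(xs) {
  const counts = new Map();
  for (const x of xs) {
    counts.set(x, (counts.get(x) ?? 0) + 1);
  }
  return [...counts.entries()]; // (element, count) tuples, in first-seen order
}

console.log(histogram(["a", "b", "a", "c", "a"]));
// [ [ 'a', 3 ], [ 'b', 1 ], [ 'c', 1 ] ]
```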


> I made a half-hearted attempt to learn Haskell

Try Clojure(script) - everything that felt confusing in Haskell becomes crystal clear, I promise.

> Consider a histogram function.

You can absolutely do this efficiently with immutable structures in Clojure, something like

      (reduce (fn [acc x]
                (update acc x (fn [v] (inc (or v 0)))))
              {}
              coll)
This is O(n) and uses immutable maps. The key insight: immutability in Clojure doesn't mean inefficiency. Each `update` returns a new map, but:

1. Persistent data structures share structure under the hood - they don't copy everything

2. The algorithmic complexity is the same as mutable approaches

3. You get thread-safety and easier reasoning as a bonus

In JS/TS, you'd need a mutable object - JS makes mutability efficient, so immutability feels awkward.

But Clojure's immutable structures are designed for this shit - they're not slow copies, they're efficient data structures optimized for functional programming.


> immutability in Clojure doesn't mean inefficiency.

You are still doing a gazillion allocations compared to:

  const hist = {};
  for (let i = 0; i < data.length; i++) { hist[data[i]] = (hist[data[i]] || 0) + 1; }
But apart from that, the mutable code is in many cases just much clearer than something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" instead of from the "bottom up" as in FP.


Sure, that’s faster. But do you really care? How big is your data? How many distinct things are you counting? What are their data types? All that matters. It’s easy to write a simple for-loop and say “It’s faster.” Most of the time, it doesn’t matter that much. When that’s the case, Clojure allows you to operate at a higher level with inherent thread safety. If you figure out that this particular code matters, then Clojure gives you the ability to optimize it, either with transients or by dropping down into Java interop where you have standard Java mutable arrays and other data structures at your disposal. When you use Java interop, you give up the safety of Clojure’s immutable data structures, but you can write code that is more optimized to your particular problem. I’ll be honest that I’ve never had to do that. But it’s nice to know that it’s there.


The allocation overhead rarely matters in practice - in some cases it does, but for the majority of "general-purpose" tasks (web services, etc.) it doesn't: GC is extremely fast, and allocations are cheap on modern VMs.

The second point I don't even buy anymore - once you're used to `reduce`, it's equally (if not more) readable. Besides, in practice you typically wouldn't write it by hand - there are tons of helper functions in the core library for working with data. I'd probably use `(frequencies coll)` - I just didn't mention it so it wouldn't feel like I'm cheating. One function call - still O(n), idiomatic, no reduce boilerplate, intent crystal clear. Aggressively optimized under the hood and far more readable.

Let's not get into strawman olympics - I'm not selling snake oil. Clojure wasn't written in some garage by a grad student last week - it's a mature and battle-tested language endorsed by many renowned CS people, there are tons of companies using it in production. In the context of (im)mutability it clearly demonstrates incontestable, pragmatic benefits. Yes, of course, it's not a silver bullet, nothing is. There are legitimate cases where it's not a good choice, but you can argue that point pretty much about any tool.


If there was a language that didn't require pure and impure code to look different but still tracked mutability at the type level like the ST monad (so you can't call an impure function from a pure one) - so not Clojure - then that'd be perfect.

But as it stands immutability often feels like jumping through unnecessary hoops for little gain really.


> then that'd be perfect.

There's no such thing as "perfect" for everyone and for every case.

> feels like jumping through unnecessary hoops for little gain really.

I dunno what you're talking about - Apple runs their payment backend, Walmart their billing system, Cisco their cybersecurity stack, Netflix their social data analysis, and Nubank serves much of Latin America - all running Clojure, pushing massive amounts of data through it.

I suppose they just have a shitload of money and can afford to go through "unnecessary hoops". But wait - why then are tons of smaller startups running on Clojure, on Elixir? I guess they just don't know any better - stupid fucks.


The topic was immutability, not Clojure?

But ok, if mutability is always worse, why not use a pure language then? No more cowardly swap! and transient data structures or sending messages back and forth like in Erlang.

But then you get to monads (otherwise you'd end up with Elm and I'd like to see Apple's payment backend written in Elm), monad transformers, arrows and the like and coincidentally that's when many Clojure programmers start whining about "jumping through unnecessary hoops" :D

Anyway, this was just a private observation I've reached after being an FP zealot for a decade, all is good, no need to convert me, Clojure is cool :)


> Clojure is cool

Clojure is not "cool". As a matter of fact, for a novice it may look distasteful - it really does. Ask anyone with prior programming experience - Python, JS, Java - to read some Clojure code for the first time and they'll start cringing.

What Clojure actually is, is a "down to earth" PL - it values substance over marketing and prioritizes developer happiness in the long run, which comes in a spectrum; it doesn't pretend everyone wants the same thing. A junior can write useful code quickly, while someone who wants to dive into FP theory can do that too. Both are first-class citizens.


> If there was a language that didn't require pure and impure code to look different

I've occasionally wondered what life would be like if I tried writing all my pure Haskell code in the Identity monad.


Same!


Next time I feel an itch to learn a language, I'll probably pick Clojure, based mostly on this comment. Not sure when that will be though.


One doesn't need to "wear a tie" to learn Clojure - the syntax is so simple it can be explained on a napkin. You need to get:

1. An editor with structural editing features - google: "paredit vim/emacs/sublime/etc.", on VSCode - simply install Calva.

2. How to connect to the REPL. Calva has the quickstart guide or something like that.

3. How to eval commands in place. Don't type them directly into the REPL console! You can, but that's not how Lispers typically work. They examine the code as they navigate/edit it - in place. It feels like playing a game - very interactive.

That's all you need to know to begin with. VSCode's Calva is great to mess around in. Even if you don't use it (I don't), it's good for beginners.

Knowing Clojure comes in super handy even when you don't write any projects in it - it's one of the best tools for dissecting data, small or large. I don't even deal with raw json to inspect curl results - I pipe them through borkdude/jet, then into babashka, and in the REPL I can filter, group, sort, slice, dice, salt & pepper that shit; I can even throw some visualizations on top - it looks delicious. And it takes not even a minute to get there - if I type fast enough, I slash through it in seconds!

Honestly, Clojure feels like the only no-bullshit, no-highfalutin, no-hidden-tricks language in my experience, and jeeeesus I've been through just a bit more than a few - starting with BASIC in my youth and Pascal and C in college; then Delphi, VB, then dotnet stuff - vb.net, c#, f# - java, ruby; all sorts of altjs shit - livescript, coffeescript, icedcoffeescript, gorillascript, fay, haste, ghcjs, typescript - haskell, python, lua, all sorts of Lisps; even some weird language where every operator was in Russian; damn, I've been trying to write some code for a good while.

Maybe I'm stupid or something, but even after all these years I failed to find a perfect language to write perfect code - all of dem feel like they got made by some motherfluggin' annoyin' bilge-suckin' vexin' barnacle-brained galoots. Even my current pick of Clojure can sometimes be annoying, but it's the least irksome one... so far. I've been eyeing Rust and Zig, and they sound nice (but every one of dem motherfuckers looks nice before you start fiddling with 'em), yet ten years from now, if I'm still kicking the caret, I will be feeding some data into a clj repl, I'm tellin' ya. That shit just fucking works and makes sense to me. I don't know how to make it stop making sense - it just fucking does.


I just want a way of doing immutability until production and letting a compiler figure out how to optimize that into potentially mutable, efficient code, since it can rely on those guarantees.

No runtime cost in production is the goal


> No runtime cost in production is the goal

Clojure's persistent data structures are extremely fast and memory efficient. Technically it's not complete zero-overhead, but pragmatically the overhead is tiny, and performance usually isn't the bottleneck anyway - typically you're I/O-bound or algorithm-bound, not immutability-bound. When it truly matters, you can always drop down to mutable host-language structures - Clojure is a "hosted" language that sits atop your stack (JVM/JS/Dart), so it all depends on the runtime. In javaland, JVM optimizations feel like blackmagicfuckery - there's the JIT, escape analysis (it proves objects don't escape and stack-allocates them), dead code elimination, etc. For like 95% of use cases, the performance of an immutable-first language (here, Clojure) is simply not a problem.

Haskell can be even faster because it's pure by default, so the compiler can optimize aggressively.

Elixir is a bit of a different story - it might be slower than Clojure for CPU-bound work, but only because BEAM focuses on consistent (not peak) performance.

Pragmatically, for tasks that are CPU-bound where the requirement is "absolute zero-cost immutability", Rust is a great choice today. The trade-off is that the development cycle is dramatically slower in Rust compared to Clojure - the REPL-driven nature of Clojure lets you prototype and build very fast.

From many utilitarian angles, Clojure is an enormously practical language; I highly recommend getting some familiarity with it, even if it feels very niche today. I think it was Stu Halloway who said something like: "when Python was the same age as Clojure, it was also a niche language"


This doesn’t make much sense. One of the benefits of immutability is that once you create a data structure, it doesn’t change and you can treat it as a value (pass it around, share it between threads without cloning it, etc.). If you now allow modifications, you’re suddenly violating all those guarantees and you need to write code that defensively makes clones, so you’re right back where you started. In Clojure, you can cheat at points with transients where the programmer knows that a certain data structure is only seen by a single thread of execution, but you’re still immutable most of the time.


Depends on your target. Clojure targets the JVM by default and that has very different constraints than say, compiling to JavaScript for the browser or node.

For compiling to a JS engine, this would be great, because immutability there has a runtime cost


Clojurescript supports transients https://clojureverse.org/t/transients-in-clojurescript/9102/...

Runtime cost of using Clojurescript is undeniably there, but for most applications it's a pretty negligible price to pay for the big wins. In practice, Clojurescript apps can often perform faster than similar apps built traditionally - especially for things like render optimization (immutable data enables cheap equality checks for memoization, preventing unnecessary re-renders), data-transform pipelines (transducers give you lazy evaluation, great for filtering/mapping over large datasets), and caching (immutable data is safe to cache indefinitely - you never have to worry about stale data).
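The render-optimization point can be sketched in plain JS (the state shape here is made up): with immutable updates, "did anything change?" becomes a reference comparison rather than a deep walk.

```javascript
// With immutable updates, untouched subtrees keep their old reference,
// so "did this part change?" is a single === check.
const state = { user: { name: "ada" }, cart: { items: [1, 2] } };

// An update that only touches `cart` rebuilds the path to `cart`
// and reuses `user` as-is:
const next = { ...state, cart: { ...state.cart, items: [1, 2, 3] } };

console.log(next.user === state.user); // true  -- safe to skip re-rendering user UI
console.log(next.cart === state.cart); // false -- cart UI needs a re-render
```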

You guys keep worrying about theoretical "costs" - in practice, I have yet to encounter a problem that genuinely makes it so impossibly slow that Clojurescript just outright can't be used. Situations where it incurs a practical cost are outliers, not the general rule.


> but for most applications is pretty negligible

Not all, and it is always preferable for it not to have a cost.

> I have yet to encounter a problem that genuinely makes it so impossibly slow that Clojurescript just outright can't be used

There’s a big swath of work that could benefit from the development streamlining that something like clojurescript or similar projects can give but any performance hit is deadly, like e-commerce.

There is also the fact that it doesn’t have 100% bindings to the raw JS and DOM APIs if I recall correctly, it’s often wrapped around React or assumed to be


> it is always preferable not to have a cost.

Of course. That's why every programmer today is fluent in assembly.

> like e-commerce.

You're misinformed. I don't know where you're getting all this, but I have done enough front-end work and have built ecommerce solutions - clojurescript works better than many different things I've used, and believe me, I've tried more than just a few over the years - livescript, gorillascript, coffeescript, icedcoffeescript, typescript, haste, fay, ghcjs; I've considered elm, purescript and reasonml. The only thing I have not built with it for the web is games. That's the only domain where I may theoretically hit limitations, but since I've never done it in practice, I can't even say what those limitations would be - and I know people have done it with great success.

Pitch.com built their presentation platform in clojurescript, Whimsical their chart-drawing boards - and you don't even have to imagine: that shit needs to squeeze out every drop of a cycle for canvas/WebGL and DOM updates. Most web app bottlenecks are architectural - bad algorithms, inefficient data structures, unnecessary re-renders - and Clojurescript is just amazing for dealing with these kinds of problems.

> bindings to the raw JS and DOM APIs

You're talking nonsense. Clojurescript can directly call any javascript function and any API. Please at least google, or ask an LLM, before blurting out nonsense like this. I don't know what level of emotional insecurity you're dealing with, but I advise you to try things out instead of prematurely reaching the level of "I hate this" without even understanding what you're actually hating. It's not that hard. These days you don't even have to learn all the intricacies of compiling it - you can use Squint, the light-weight dialect of Clojurescript. It's as easy as using a regular script tag.

https://github.com/squint-cljs/squint

Or don't use it, who cares? Stay in your mental FUD castle, letting theoretical constraints become real ones by never testing their boundaries. I have used Clojurescript, I liked it and will use it again - I have seen with my own eyes how it actually works much better than any other alternatives I have tried so far - in practical web apps that I shipped to production. I'm not married to it - it simply makes sense. For many practical reasons it does. Whenever it stops making sense and I find a better alternative, I will switch without hesitation. For now, it just works for me.


> Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'.

I felt that way in the latest versions of Scheme, even. It’s bolted on. In contrast, in Clojure, it’s extremely fundamental and baked in from the start.


exactly - react could not deal with mutable objects, so they decided to make immutability seem like something that, if you didn't use it, meant you didn't understand programming.


React made immutability patterns more relevant which increased discussion of it. Some people did get preachy about it. Yet dismissing immutability entirely just because of that misses the entire point of why it's actually useful in managing complex state.

Have you ever thought about learning the actual benefits of immutability, instead of having an emotional reaction to obnoxious gatekeeping?


It's redundant in a single-threaded environment. Everyone moved to mobile, while pages are getting slower and slower, using more and more memory. This is not the way. Immutability has its uses, but it's not good for most web pages.


You're just waving off the whole bag of benefits:

Yes, js runs user code on a single thread, but immutability still provides immense value: predictability, simpler debugging, time-travel debugging, react/framework optimizations.

Modern js engines are optimized for short-lived objects, and creating new objects instead of mutating uses more memory only temporarily. The performance impact of immutability is negligible compared to so many other factors (large bundles, unoptimized images, excessive DOM manipulation).

You're blaming the wrong thing for bloated memory. I don't know a single website that is bloated and slow only because the makers decided to use immutable data structures. In fact, you might have it exactly backwards - maybe web pages are getting slower because we're trying to put more logic into them, building more sophisticated programs, and the problem is precisely that they're no longer simple to reason about. Reasoning about code in an immutable-first PL is so much simpler - you probably have no idea, otherwise you wouldn't be saying "this is not the way"


If we are comparing dates, ML did it in 1973 - or, if you prefer the first mature implementation, SML in 1983.

Okasaki's work on purely functional data structures, which Clojure's data structures draw on, dates to his 1996 thesis (the book followed in 1998).

That's how far behind the times we are.


Cool. I didn’t realize ML had such a focus on immutability as well. I have never done any serious work in ML and it’s a hole in my knowledge. I have to go back and do a project of some sort using it (and probably one in Ocaml as well). What data structures does ML use under the hood to keep things efficient? Clojure uses Bagwell’s Hashed Array-Mapped Tries (HAMT), but Bagwell only wrote the first papers on that in about 2000. Okasaki’s book came out in 1998, and much of the work around persistent data structures was done in the late 1980s and 1990s. But ML predates most of that, right?


programming with immutability has been best practice in js/ts for almost a decade

however, enforcing it is somewhat difficult & there's still quite a bit lacking when working with plain objects or maps/sets.


We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.

As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code is running in V8, JSCore, or SpiderMonkey, depending on what browser the end user is using. JavaScript is also a loosely typed language with zero concept of immutability at the native runtime level.

And immutability in JavaScript - without native support, which we could hopefully see in some future version of EcmaScript - has the potential to hurt runtime performance.

I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.

One expensive operation in JavaScript is cloning objects (which includes arrays). If you do that a lot, if, say, you're using something like Redux or NgRx, where immutability is a design goal and you're cloning your application's runtime state object on every single state change, then you are heavily de-optimized for performance, depending on how much state you're holding onto.
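As a rough illustration of that cost (a hypothetical reducer, not code from any real app), spreading the whole state to change one field shallow-copies every top-level key on every update:

```typescript
// Hypothetical Redux-style update: incrementing one counter still
// shallow-copies every top-level key, so update cost scales with state size.
type AppState = Record<string, unknown> & { counter: number };

function makeState(keys: number): AppState {
  const s: AppState = { counter: 0 };
  for (let i = 0; i < keys; i++) s[`slice${i}`] = { data: i };
  return s;
}

function increment(state: AppState): AppState {
  return { ...state, counter: state.counter + 1 }; // O(number of keys), not O(1)
}

const before = makeState(1000);
const after = increment(before);
console.log(after.counter);                  // 1
console.log(before.counter);                 // 0 (original untouched)
console.log(after.slice0 === before.slice0); // true: shallow copy shares children
```

Note the last line: the copy is shallow, so nested objects are shared; only the top level is recreated, and that alone is what you pay for on every dispatch.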

And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days when your servers could own your state and your clients could just be "dumb" presentation and view layers. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner, in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want it that way, but because the business goals necessitate it.

Then consider the spread operator, and how much you might see it in TypeScript code:

  const foo = {
    ...bar, // clones bar, so the cost of this one expression scales with bar's size
    newPropertyValue,
  };

  // same thing: clones the entire original array just to append a single item,
  // because "immutability is good, because I was told it is"
  const foo = [...array, newItem];

And then consider all of the "immutable" (non-mutating) Array methods like .reduce(), .map(), and .filter().

They're nice syntactically; I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for loop and will turn an O(N) operation into an O(N^2) or even O(N^3) one by nesting these inside one another with no consideration for the performance impact.
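A contrived sketch of the kind of thing I mean (hypothetical names, not code from any real codebase): an .includes() scan nested inside a .filter() callback turns a linear pass into a quadratic one, while building a Set once keeps it linear:

```typescript
// Quadratic: .includes inside .filter rescans `banned` for every element.
function removeBannedSlow(items: number[], banned: number[]): number[] {
  return items.filter((x) => !banned.includes(x)); // O(items.length * banned.length)
}

// Linear: build a Set once, then each membership check is O(1) on average.
function removeBannedFast(items: number[], banned: number[]): number[] {
  const bannedSet = new Set(banned);
  return items.filter((x) => !bannedSet.has(x)); // O(items.length + banned.length)
}

const items = [1, 2, 3, 4, 5];
console.log(removeBannedSlow(items, [2, 4]).join(",")); // 1,3,5
console.log(removeBannedFast(items, [2, 4]).join(",")); // 1,3,5
```

Same result, wildly different cost once the arrays get large; the classic for loop version of the fast path would perform the same as the .filter one.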

And of course you can write performant or non-performant code in any language. And I'm the first to preach that you should write clean, easy-to-maintain code, then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability, and the way to write immutable JavaScript puts you in a position where performance is going to be worse overall, because the tools you're forced to reach for as a matter of course are themselves inherently de-optimized.


> We shouldn't forget that there are trade-offs

As @drob518 already noted, the only benefit of mutation is performance. That's it; that's the single valid point in its favor. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.

"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.

So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?

That's the only problem that needs to be solved in the runtime to gain, almost for free, the countless benefits you yourself acknowledge.

But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.


> But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.

I would have agreed with that statement a few years ago.

But what I am seeing in the wild, is an ideological attachment to the belief that "immutability is always good, so always do that"

And what we're seeing is NOT a ton of bugs and defects that are caused by state mutation bugs. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in some part, by developers who are programming in a language that does not support native immutability but they're trying to shoe-horn it in because of a BELIEF that it will for sure, always cut down on the number of defects.

Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.

Any civil engineer will talk to you about tolerances. Only programmers ever say something is "inherently right" or "inherently wrong" regardless of context.

If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to shoe-horning in a paradigm that the runtime does not support natively, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing negative value, and by saying "we should make the runtime faster" you are hand-waving to a degree that would, and should, get you fired by that company.

If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.

In a different context, when you are a software engineer who is being paid to develop customer facing features, "just make the runtime faster" is not a viable option. Not something even worth talking about since you have no direct influence on that.

And the reason I brought this up, is because we're talking about JavaScript / TypeScript specifically.

In any other language, like Clojure, it's moot because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoe-horn that in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet are performing on our devices while also saying "JavaScript developers should do immutability MORE."

At my company, measurable performance degradation is considered a defect that blocks a release. So you can't even claim you're reducing defects through immutability if anyone can point to a single PR that caused a perf degradation by doing something in an immutable way.

So yeah, it's all trade-offs. It comes down to what you are prioritizing: runtime performance or data integrity? Not all applications will value both equally.


Alright, I admit, I have not worked on teams where Immutable.js was used a lot, so I don't have any insight specifically into its impact on performance.

Still, I personally wouldn't call immutability a "trade-off," even in a JS context. For the majority of apps it's still a big win. I've seen that many times with ClojureScript, which doesn't have a native runtime of its own; it eventually emits JavaScript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing JS code compared to vanilla JS with Immutable.js on top.

For some kinds of apps, yes, for sure, performance is the ultimate priority. In my mind that's a "trade-off" similar to using C or even assembly because of required performance: undeniably important, yet those situations represent only a small fraction of overall use cases.

But sure, I agree with everything you say: immutability is great in general, but not for every given case.


Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
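For anyone unfamiliar, the trick is structural sharing: a persistent structure reuses most of the previous version instead of cloning it. A toy cons-list sketch of the idea (Clojure's actual vectors and maps use far more sophisticated tries, but the principle is the same):

```typescript
// Toy persistent list: "adding" an element allocates one new node and shares
// the entire old list, so an update is O(1) instead of an O(n) clone.
type ConsNode<T> = { head: T; tail: ConsList<T> };
type ConsList<T> = ConsNode<T> | null;

function cons<T>(head: T, tail: ConsList<T>): ConsNode<T> {
  return { head, tail };
}

function toArray<T>(list: ConsList<T>): T[] {
  const out: T[] = [];
  for (let node = list; node !== null; node = node.tail) out.push(node.head);
  return out;
}

const base = cons(2, cons(3, null));
const extended = cons(1, base); // reuses `base` wholesale, no copying

console.log(toArray(extended).join(",")); // 1,2,3
console.log(toArray(base).join(","));     // 2,3 (old version still intact)
console.log(extended.tail === base);      // true: structural sharing
```

Both versions remain usable after the "update," and the memory overhead is a single node, which is exactly the property that naive spread-based cloning lacks.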



