feoren's comments

If this doesn't happen, are you going to accept that you were wrong, or are you going to ignore it and be off spreading unfounded anger about some other imagined offense?


If that doesn’t happen how will he pay for all the stuff he wants to give away? The money has to come from somewhere.


Moving money from other spending?


> Why would you instantiate a class and call it result‽

Are you suggesting that the results of calculations should always be some sort of primitive value? It's not clear what you're getting hung up on here.


No, the result of a calculation could be a key value or list or other compound value - whatever the result is. I am getting hung up on deceptive naming. If you have a 'result', the calculation is done. You have a result.


> For example disassembling a microwave

Getting this wrong will not be a learning experience, because it will kill you. This is an incredibly dangerous thing to do and should only be done by people who already know what they're doing.

That's not just a tangential tidbit -- you don't learn well when you are completely out of your depth. You learn well when you are right at the edge of your ability and understanding. That involves risk of failure, but the failure isn't the important part, operating on the boundary is.


Yes, disassembling and tinkering with a microwave is almost the only easily available thing I would not recommend at all, because it seems so ordinary but the actual danger is way too high and hidden. Even if you know exactly what the dangers are, it takes only one distracted move, or just being tired or too complacent, to end really badly. Even units that have been disconnected for a while and are expected to be safe can hold a really nasty shock when the discharge resistor is faulty; that failure mode does not usually kill, but it is not recommended.


People's own opinions about what their name is is not a "non-issue", shitty-ass governments or not. Declaring a people's opinions about names stupid and irrelevant (or even illegal) is one of the many ways majorities oppress or even commit slow genocide against minorities.


> Declaring a people's opinions about names stupid and irrelevant (or even illegal) is one of the many ways majorities oppress or even commit slow genocide against minorities.

My point was governments do this all the time and it is a far cry from fascism. Elsewhere in the thread, it is mentioned that often times you have to compromise when registering a name in a different country (for instance, if the language does not contain a phoneme used in your name). In that case, you have to conform to the country's culture and language. Under that lens, banning names that violate cultural norms is not so crazy.


There are reasonable regulations and unreasonable regulations. The idea that since some regulation exists, it would be totally fine to allow any other rule is absurd.

Yes, people (specifically women) with strong opinions on the suffix of their name exist, and the proper solution for the government is to butt out of that decision. This is not a norm worth keeping by force.


The relevant laws in many Western countries today exist so that children don't get saddled with patently stupid names by their parents (see also: Elon Musk and his kids).


You hope it doesn't?

> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features

This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for newness over completeness or correctness.

There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.

Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.


> There's no reason we can't be writing code that lasts 100 years. Code is just math.

In theory, yes. In practice, no, because code is not just math, it's math written in a language with an implementation designed to target specific computing hardware, and computing hardware keeps changing. You could have the complete source code of software written 70 years ago, and at best you would need to write new code to emulate the hardware, and at worst you're SOL.

Software will only stop rotting when hardware stops changing, forever. Programs that refuse to update to take advantage of new hardware are killed by programs that do.


This is a total red herring: x86 has over 30 years of backwards compatibility, and the same goes for the basic peripherals.

The real reason for software churn isn't hardware churn, but hardware expansion. It's well known that software expands to use all available hardware resources (or even more, according to Wirth's law).


30 years ago, right before Windows 95 came out, Windows was a 16-bit OS and the modern versions of Windows no longer support 16-bit programs. PCIe came out only in 2003, and I don't know that PCIe slots can support PCI. SATA is also from 2003. Even USB originally came out in 1996, and the only pre-USB connector slot I have on my computer is a PS/2 port (which honestly surprises me). For monitor connections, VGA and DVI (1999!) have died off, and their successors (HDMI, DisplayPort) are only in the 2000's.

So pretty much none of the peripherals--including things like system memory and disk drives, do note--from a computer in 1995 can talk using any of the protocols a modern computer supports (save maybe a mouse and keyboard) and require compatibility adapters to connect, while also pretty much none of the software works without going through custom compatibility layers. And based on my experience trying to get a 31-year old Win16 application running on a modern computer, those compatibility layers have some issues.


PCIe is mostly backwards compatible with PCI, and bridge chips used to be quite common. ISA to PCI is harder, but not unheard of.

"SATA" stands for "serial ATA", and has the same basic command set as the PATA from 1984 - bridge chips were widely used. And it all uses SCSI, which is also what USB Mass Storage Devices use. Or if you're feeling fancy, there's a whole SCSI-to-NVMe translation standard as well.

HDMI is fully compatible with single-link DVI-D, you can find passive adapters for a few bucks.

There's one port you forgot to mention: ethernet! A brand-new 10Gbps NIC will happily talk with an ancient 10Mbps NIC.

It might look different, but the pc world is filled with ancient technology remnants, and you can build some absolutely cursed adapter stacks. If anything, the limiting factor is Windows driver support.


Slight caveat that a lot of Ethernet PHYs > 1G don't go down to 10 Mb, some don't go to 100 Mb, and some are only the speed they want to be (though luckily that's not very common). There exist 6-speed PHYs (10, 100, 1000, 2500, 5000, 10000), but that doesn't mean everything will happily talk.


You're confusing quite a few things together.

The basic peripherals (keyboard and monitor) of today still present the same interface as they did back in the IBM PC era. Everything else is due to massive hardware expansion, not hardware churn.

How often do you update your drivers compared to your typical internet-connected app? Software that handles the idiosyncrasies of the hardware (aka drivers) generally has a much longer lifespan than most other software; I don't see how you can reasonably say hardware breaking backwards compatibility is why software keeps changing.


Python programs do not care about SATA/PCI-E.


Python programs run on an interpreter, which runs on an OS, which has drivers that run on a given piece of hardware. All of the layers of the stack need to be considered and constantly maintained in order for preservation to work.


Some do, most don't. (Don't generalize)


Try running software from 1995 on a brand new system and you'll find all sorts of fun reasons why it's more complicated than that.


I don’t think I can take that claim by itself as necessarily implying the cause is hardware. Consumer OSes were on the verge of getting protected memory at that time, as an example of where things were, so if I imagine “take an old application and try to run it” then I am immediately imagining software problems, and software bit rot is a well-known thing. If the claim is “try to run Windows 95 on bare metal”, then…well actually I installed win98 on a new PC about 10 years ago and it worked. When I try to imagine hardware changes since then that a kernel would have to worry about, I’m mostly coming up with PCI Express and some brave OEMs finally removing BIOS compatibility and leaving only UEFI. I’m not counting lack of drivers for modern hardware as “hardware still changes” because that feels like a natural consequence of having multiple vendors in the market, but maybe I could be convinced that is a fundamental change in and of itself…however even then, that state of things was extremely normalized by the 2000s.


Drivers make up a tiny portion of the software on our computer by any measure (memory or compute time) and they're far longer lasting than your average GUI app.


On the other hand, the main reason why Y2K happened was because a lot of major orgs would rather emulate software from the 60s forever than rewrite it. I'm talking like ancient IBM mainframe stuff, running on potentially multiple layers of emulation and virtualization.

We rewrite stuff for lots of reasons, but virtualization makes it easy enough to take our platforms with us even as hardware changes.


Pretty sure if I downloaded and compiled Tcl/Tk 7.6.x source code on a modern Linux box, it would run my Tcl/Tk 7.6.x "system monitor" code from 1995 or 1996 just fine.


Do you have any examples that aren't because of the OS (as in, not trying to run a 90's game on Windows 11) or specialized hardware (like an old Voodoo GPU or something)?


The whole point is that everything changes around software. Drivers, CPUs, GPUs, web browsers, OSs, common libraries, etc. Everything changes.

It doesn't matter if x86 is backwards compatible if everything else has changed.

No code can last 100 years in any environment with change. That's the point.


If you restrict yourself to programs that don't need an OS or hardware, you're going to be looking at a pretty small set of programs.


I don't, but I do restrict that you run it on the same OS as it was designed for.


The program may work fine on its original OS, and the OS may work fine on its original hardware, but for someone trying to actually run their business or what have you on the software these facts are often not particularly helpful.


Backwards-compatibility in OSes is the exception, not the rule. IBM does pretty well here. Microsoft does okay. Linux is either fine or a disaster depending on who you ask. MacOS, iOS, and Android laugh at the idea. And even the OSes most dedicated to compatibility devote a ton of effort to ensuring it on new hardware.


x86 doesn't have magical backwards compatibility powers.

The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.


> x86 doesn't have magical backwards compatibility powers.

I never said it did; other ISAs have similar if not longer periods of backwards compatibility (IBM's Z systems architecture is backwards compatible with the System/360 released in 1964).

> The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.

I never mentioned Windows but it's ridiculous to imply its backwards compatibility is all on Microsoft. Show me a single example of a backwards breaking change in x86 that Windows has to compensate for to maintain backwards compatibility.


>I never mentioned Windows but it's ridiculous to imply its backwards compatibility is all on Microsoft.

I never said that. Windows was just an easy example.

>Show me a single example of a backwards breaking change in x86 that Windows has to compensate for to maintain backwards compatibility.

- The shift from 16-bit to 32-bit protected mode with the Intel 80386 processor that fundamentally altered how the processor managed memory.

- Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.

- The shift to x86-64 that Microsoft had to compensate with emulation and WOW64

And many more. That you think otherwise just shows all the effort that has been done.


> The shift from 16-bit to 32-bit protected mode with the Intel 80386 processor that fundamentally altered how the processor managed memory.

I said x86 has "over 30 years of backwards compatibility". The 80386 was released in 1985, 40 years ago :)

> Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.

This is the only breaking change in x86 that I'm aware of and it's a rather light one, as it only affected programs relying on addresses wrapping around at exactly 2^20 (the 8086's 1 MB address space). And, again, that was over 40 years ago!

> The shift to x86-64 that Microsoft had to compensate with emulation and WOW64

No, I don't think so. An x86-64 CPU starts in 32 bit mode and then has to enter 64 bit mode (I'd know, I spent many weekends getting that transition right for my toy OS). This 32 bit mode is absolutely backwards compatible AFAIK.

WOW64 is merely a part of Microsoft's OS design to allow 32 bit programs to do syscalls to a 64 bit kernel, as I understand it.


The bare minimum cost of software churn is the effort of one human being, which is far less than hardware churn (multiple layers of costly design and manufacturing). As a result, we see hardware change gradually over the years, while software projects can arbitrarily deprecate, change, or remove anything at a whim. The dizzying number of JS frameworks, the replacement of X with Wayland or init with systemd, removal of python stdlib modules, etc. etc. have nothing to do with new additions to the x86 instruction set.


TeX is written in a literate programming style which is more akin to a math textbook than ordinary computer code, except with code blocks instead of equations. The actual programming language in the code blocks and the OS it runs on matters a lot less than in usual code where at best you get a few sparse comments. Avoiding bit rot in such a program is a very manageable task. In fact, iirc the code blocks which end up getting compiled and executed for TeX have been ported from Pascal to C at some point without introducing any new bugs.


The C version of TeX is also terrible code in the modern day (arbitrary limits, horrible error handling, horrible macro language, no real Unicode support, etc. etc), hence LuaTeX (et al.) and Typst and such.

The backward-compat story is also oversold because, yes, baseline TeX is backward compatible, but I bet <0.1% of "TeX" documents don't use some form of LaTeX and any number of packages... which sometimes break, at which point the stability of base TeX doesn't matter for actual users. It certainly helps for LaTeX package maintainers, but that doesn't matter to users.

Don't get me wrong, TeX was absolutely revolutionary and has been used for an insane amount of scientific publishing, but... it's not great code (for modern requirements) by any stretch.


> and computing hardware keeps changing.

Only if you can't reasonably buy a direct replacement. That might have been a bigger problem in the early days of computing where people spread themselves around, leaving a lot of business failures and thus defunct hardware, but nowadays we all usually settle on common architectures that are very likely to still be around in the distant future due to that mass adoption still providing strong incentive for someone to keep producing it.


This is correct when it comes to bare metal execution.

You can always run code from any time with emulation, which gives the “math” the inputs it was made to handle.

Here’s a site with a ton of emulators that run in browser. You can accurately emulate some truly ancient stuff.

https://www.pcjs.org/


Given how mature emulation is now why couldn't that just continue to be possible into the future?


Each new layer of emulation is new code that needs to be written that wasn't required when the original program in question was written. It's a great approach for software preservation, but the fact that it's necessary shows why the approach of "if it ain't broke, don't fix it" doesn't work. The context of computing is changing around us at all times, and hardware has a finite lifespan.


Eh. Emulators are often tiny in comparison to the programs they emulate. Especially when performance isn't so much of a concern - like when you're emulating software written for computers from many decades ago. A good emulator can also emulate a huge range of software. Just look at programs like dosbox and the like. Or Apple's great work with Rosetta and Rosetta2 - which are both complex, but much less complex than all the software they supported. Software like Chrome, Adobe Photoshop and the Microsoft office suite.

Arguably modern operating systems are all sort of virtual machine emulators too. They emulate a virtual computer which has special instructions to open files, allocate memory, talk to the keyboard, make TCP connections, create threads and so on. This computer doesn't actually exist - it's just "emulated" by the many services provided by modern operating systems. That's why any Windows program can run on any other Windows computer, despite the hardware being totally different.


Or get an IBM 360 and have support for the next two thousand years, which is the choice our parents made.


Rich parents.


This is possible, and ubiquitous. Your terminal runs on an emulator of an emulator of a teletype.


Are you by chance a Common Lisp developer? If not, you may like it (well, judging only by your praise of stability).

Completely sidestepping any debate about the language design, ease of use, quality of the standard library, size of community, etc... one of its strengths these days is that standard code basically remains functional "indefinitely", since the standard is effectively frozen. Of course, this requires implementation support, but there are lots of actively maintained and even newer options popping up.

And because extensibility is baked into the standard, the language (or its usage) can "evolve" through libraries in a backwards compatible way, at least a little more so than many other languages (e.g. syntax and object system extension; notable example: Coalton).

Of course there are caveats (like true, performant async programming) and it seems to be a fairly polarizing language in both directions; "best thing since sliced bread!" and "how massively overrated and annoying to use!". But it seems to fit your description decently at least among the software I use or know of.


I respect and understand the appeal of LISP. It is a great example of code not having to change all the time. I personally haven't had a compelling reason to use it (post college), but I'm glad I learned it and I wouldn't be averse to taking a job that required it.

While writing "timeless" code is certainly an ideal of mine, it also competes with the ideals of writing useful code that does useful things for my employer or the goals of my hobby project, and I'm not sure "getting actual useful things done" is necessarily LISP's strong suit, although I'm sure I'm ruffling feathers by saying so. I like more modern programming languages for other reasons, but their propensity to make backward-incompatible changes is definitely a point of frustration for me. Languages improving in backward-compatible ways is generally a good thing; your code can still be relatively "timeless" in such an environment. Some languages walk this line better than others.


I think, the "useful" part is more covered by libraries than everything else, and the stability and flexibility of the core language certainly helps with that. Common Lisp is just not very popular (as every lisp) and does not have a very big ecosystem, that's it.

Another point for stability is about how much a runtime can achieve if it is constantly improved over decades. Look where SBCL, a low-headcount project, is these days.

We should be very vigilant and ask for every "innovation" whether it is truly one. I think it is fair to assume that for most people who have worked in this industry for decades, the view would be that most innovations are just fads, hype and resume-driven development - the rest could just as well be implemented as a library on top of something existing. The most progress we've had was in tooling (rust, go), which does not require language changes per se.

I think the frustrating part about modern stacks is not the overwhelming amount of novelty, it is just that it feels like useless churn and the solutions are still as mediocre or even worse than what we've had before.


Stability is for sure a very seductive trait. Also, I can totally understand the fatigue from chasing the next, almost-already-obsolete new thing.

>There's no reason we can't be writing code that lasts 100 years.

There are many reasons this is most likely not going to happen. Code, despite best efforts to achieve separation of concerns (in the best case), is a highly contextual piece of work. Even with a simple program with no external libraries, there is a full compiler/interpreter ecosystem that forms a huge dependency. And the hardware platforms they abstract from are also a moving target. Change is the only constant, as we say.

>Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago?

Well, that might surprise you, but no, they weren't. At least, they were not dealt with as they are taught and understood today in their most common contemporary presentation. When the Babylonians (c. 2000 BCE) solved quadratic equations, they didn't have anything near Descartes' algebraic notation connecting algebra to geometry, and there is a long series of evolutions in between, continuing to this day.

Mathematicians actually do make a lot of fancy innovative things all the time. Some fundamentals stay stable over millennia, yes. But some problems also stay unsolved for millennia until someone makes an outrageous move outside the standard approach.


Don't know about 100 years, but old static web pages from the late 90's with JS still work on the Wayback Machine. There might be something to this static HTML/CSS approach for archiving content, maybe even little programs.


Yes, and we only need a browser to achieve that, the kind of software well known to be small, light and having only sporadic changes introduced into it. :D

That's actually a good moment to wonder at what amazing things they are, really.


To be fair, if math did have version numbers, we could abandon a lot of hideous notational cruft / symbol overloading, and use tau instead of pi. Math notation is arguably considerably worse than perl -- can you imagine if perl practically required a convention of single-letter variable names everywhere? What modern language designer would make it so placing two variable names right next to each other denotes multiplication? Sheer insanity.

Consider how vastly more accessible programming has become from 1950 until the present. Imagine if math had undergone a similar transition.


Math personally "clicked" for me when I started to use Python and R for mathematical operations instead of the conventional arcane notation. It did make me wonder why we insist on forcing kids and young adults to struggle through particularly counter-intuitive ways to express mathematical concepts just because of historical baggage, and I am glad to hear now that I am not the only one who thinks this way.


What in the Hacker News in this comment?

Mathematical notation evolved to its modern state over centuries. It's optimized heavily for its purpose. Version numbers? You're being facetious, right?


>evolved

Yes, it evolved. It wasn't designed.

>Version numbers?

Without version numbers, it has to be backwards-compatible, making it difficult to remove cruft. What would programming be like if all the code you wrote needed to work as IBM mainframe assembly?

Tau is a good case study. Everyone seems to agree tau is better than pi. How much adoption has it seen? Is this what "heavy optimization" looks like?

It took hundreds of years for Arabic numerals to replace Roman numerals in Europe. A medieval mathematician could have truthfully said: "We've been using Roman numerals for hundreds of years; they work fine." That would've been stockholm syndrome. I get the same sense from your comment. Take a deep breath and watch this video: https://www.youtube.com/watch?v=KgzQuE1pR1w

>You're being facetious, right?

I'm being provocative. Not facetious. "Strong opinions, weakly held."


> Without version numbers, it has to be backwards-compatible

If there’s one thing that mathematical notation is NOT, it’s backwards compatible. Fields happily reuse symbols from other fields with slightly or even completely different meanings.

https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo... has lots of examples, for example

÷ (division sign)

Widely used for denoting division in Anglophone countries, it is no longer in common use in mathematics and its use is "not recommended". In some countries, it can indicate subtraction.

~ (tilde)

1. Between two numbers, either it is used instead of ≈ to mean "approximatively equal", or it means "has the same order of magnitude as".

2. Denotes the asymptotic equivalence of two functions or sequences.

3. Often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes.

4. Standard notation for an equivalence relation.

5. In probability and statistics, may specify the probability distribution of a random variable. For example, X∼N(0,1) means that the distribution of the random variable X is standard normal.

6. Notation for proportionality. See also ∝ for a less ambiguous symbol.

Individual mathematicians even are known to have broken backwards compatibility. https://en.wikipedia.org/wiki/History_of_mathematical_notati...

*Euler used i to represent the square root of negative one (√-1) although he earlier used it as an infinite number*

Even simple definitions have changed over time, for example:

- how numbers are written

- is zero a number?

- is one a number?

- is one a prime number?


> Fields happily reuse symbols from other fields with slightly or even completely different meanings.

Symbol reuse doesn't imply a break in backwards compatibility. As you suggest with "other fields", context allows determining how the symbols are used. It is quite common in all types of languages to reuse symbols for different purposes, relying on context to identify what purpose is in force.

Backwards incompatibility tells that something from the past can no longer be used with modern methods. Mathematical notation from long ago doesn't much look like what we're familiar with today, but we can still make use of it. It wasn't rendered inoperable by modern notation.


> Mathematical notation from long ago doesn't much look like what we're familiar with today, but we can still make use of it.

But few modern mathematicians can understand it. Given enough data, they can figure out what it means, but that’s similar to (in this somewhat weak analogy) running code in an emulator.

What we can readily make use of are mathematical results from long ago.


> Given enough data, they can figure out what it means

Right, whereas something that isn't backwards compatible couldn't be figured out no matter how much data is given. Consider this line of Python:

   print(5 / 2)
There is no way you can know what the output should be. That is, unless we introduce synthetic context (i.e. a version number). Absent synthetic context we can reasonably assume that natural context is sufficient, and where natural context is sufficient, backwards compatibility is present.
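
For anyone who hasn't been bitten by this particular one, the two readings of that same line are (illustration only):

    # Python 2: prints 2   (integer floor division)
    # Python 3: prints 2.5 (true division)
    print(5 / 2)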

> What we can readily make use of are mathematical results from long ago.

To some degree, but mostly we've translated the old notation into modern notation for the sake of familiarity. And certainly a lot of programming that gets done is exactly that: Rewriting the exact same functionality in something more familiar.

But like mathematics, while there may have been a lot of churn in the olden days when nothing existed before it and everyone was trying to figure out what works, programming notation has largely settled on what is familiar with some reasonable stability, and no doubt it will only continue to find even greater stability as it matures.


Mathematical notation isn't at all backwards compatible, and it certainly isn't consistent. It doesn't have to be, because the execution environment is the abstract machine of your mind, not some rigidly defined ISA or programming language.

> Everyone seems to agree tau is better than pi. How much adoption has it seen?

> It took hundreds of years for Arabic numerals to replace Roman numerals in Europe.

What on earth does this have to do with version numbers for math? I appreciate this is Hacker News and we're all just pissing into the wind, but this is extra nonsensical to me.

The reason math is slow to change has nothing to do with backwards compatibility. We don't need to institute Math 2.0 to change mathematical notation. If you want to use tau right now, the only barrier is other people's understanding. I personally like to use it, and if I anticipate its use will be confusing to a reader, I just write `tau = 2pi` at the top of the paper. Still, others have their preference, so I'm forced to understand papers (i.e. the vast majority) which still use pi.

Which points to the real reason math is slow to change: people are slow to change. If things seem to be working one way, we all have to be convinced to do something different, and that takes time. It also requires there to actually be a better way.

> Is this what "heavy optimization" looks like?

I look forward to your heavily-optimized Math 2.0 which will replace existing mathematical notation and prove me utterly wrong.


If the compiler forbade syntactic ambiguity from implicit multiplication and had a sensible LSP allowing it to be rendered nicely, I don't think that'd be such a bad thing. Depending on the task at hand you might prefer composition or some other operation, but when reducing character count allows the pattern recognition part of our brain to see the actual structure at hand instead of wading through character soup it makes understanding code much easier.


Yep, this explains why the APL programming language was so ridiculously successful.


> There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.

Not sure this is the best example. Mathematical notation evolved a lot in the last thousand years. We're not using Roman numerals anymore, and inventions like 0 or the equal sign were incredible new features.


> Mathematical notation evolved a lot in the last thousand years

That is not counter to what I'm saying.

    Mathematical notation <=> Programming Languages.

    Proofs <=> Code.
When mathematical notation evolves, old proofs do not become obsolete! There is no analogy to a "breaking change" in math. The closest we came to this was Godel's Incompleteness Theorem and the Cambrian Explosion of new sets of axioms, but with a lot of work most of math was "re-founded" on a set of commonly accepted axioms. We can see how hostile the mathematical community is to "breaking changes" by seeing the level of crisis the Incompleteness Theorem caused.

You are certainly free to use a different set of axioms than ZF(C), but you need to be very careful about which proofs you rely on; just as you are free to use a very different programming language or programming paradigm, but you may be limited in the libraries available to you. But if you wake up one morning and your code no longer compiles, that is the analogy to one day mathematicians waking up and realizing that a previously correct proof is now suddenly incorrect -- not that it was always wrong, but that changes in math forced it into incorrectness. It's rather unthinkable.

Of course programming languages should improve, diversify, and change over time as we learn more. Backward-compatible changes do not violate my principle at all. However, when we are faced with a possible breaking change to a programming language, we should think very hard about whether we're changing the original intent and paradigms of the programming language and whether we're better off basically making a new spinoff language or something similar. I understand why it's annoying that Python 2.7 is around, but I also understand why it'd be so much more annoying if it weren't.

Surely our industry could improve dramatically in this area if it cared to. Can we write a family of nested programming languages where core features are guaranteed not to change in breaking ways, and you take on progressively more risk as you use features more to the "outside" of the language? Can we get better at formalizing which language features we're relying on? Better at isolating and versioning our language changes? Better at time-hardening our code? I promise you there's a ton of fruitful work in this area, and my claim is that that would be very good for the long-term health and maturation of our discipline.


> When mathematical notation evolves, old proofs do not become obsolete! There is no analogy to a "breaking change" in math.

I disagree. The development of non-euclidean geometry broke a lot of theorems that were used for centuries but failed to generalize. All of a sudden, parallels could reach each other.

> Can we write a family of nested programming languages where core features are guaranteed not to change in breaking ways, and you take on progressively more risk as you use features more to the "outside" of the language?

We could, the problem is everyone disagrees on what that core should be. Should it be memory-efficient? Fast? Secure? Simple? Easy to formally prove? Easy for beginners? Work on old architecture? Work on embedded architecture? Depending on who you ask and what your goals are, you'll pick a different set of core features, and thus a different notation for your core language.

That's the difference between math & programming languages. Everyone agrees on math's overall purpose. It's a tool to understand, formalise and reason about abstractions. And mathematical notation should make that easier.

That being said, the most serious candidate for your "core language guaranteed not to change and that you can build onto" would be ANSI C. It's been there for more than 35 years, is a standard, is virtually everywhere, you can even write a conforming compiler for a brand new architecture, even an embedded microchip, very easily, and most if not all of the popular languages nowadays are built on it (C++ of course, but also C#, java, javascript, python, go, php, perl, haskell, rust, all have a C base), and they all use a C FFI. I'm not sure ANSI C was the best thing that ever happened to our industry, though.


> Should it be memory-efficient? Fast? Secure? Simple? Easy to formally prove? Easy for beginners? Work on old architecture? Work on embedded architecture?

What do any of these have to do with guarantees of long-term compatibility? I'm not arguing that there should be One Programming Language To Rule Them All, I'm asking about whether we can design better guarantees about long-term compatibility into new programming languages.


The biggest source of incompatibility isn't in the programming languages. It's either in the specifications ("hmm maybe one byte isn't enough after all for a character, let's break all those assumptions") or in the hardware ("maybe 16 bits isn't enough of address space").


> There's no reason we can't be writing code that lasts 100 years. Code is just math

Math is continually updated, clarified and rewritten. 100 years ago was before the Bourbaki group.


> Math is continually updated, clarified and rewritten

And yet math proofs from decades and centuries ago are still correct. Note that I said we write "code that lasts", not "programming languages that never change". Math notation is to programming languages as proofs are to code. I am not saying programming languages should never change or improve. I am saying that our entire industry would benefit if we stopped to think about how to write code that remains "correct" (compiling, running, correct behavior) for the next 100 years. Programming languages are free to change in backward-compatible ways, as long as once-correct code is always-correct. And it doesn't have to be all code, but you know what they say: there is nothing as permanent as a temporary solution.


> an insatiable hunger for newness over completeness or correctness.

I understand some of your frustration, but often the newness is in response to a need for completeness or correctness. "As we've explored how to use the system, we've found some parts were missing/bad and would be better with [new thing]". That's certainly what's happening with Python.

It's like the Incompleteness Theorem, but applied to software systems.

It takes a strong will to say "no, the system is Done, warts and missing pieces and all. Deal With It". Everyone who's had to deal with TeX at any serious level can point to the downsides of that.


If you look at old math treatises from important historical people you'll notice that they use very different notation from the one you're used to. Commonly concepts are also different, because those we use are derived over centuries from material produced without them and in a context where it was traditional to use other concepts to suss out conclusions.

But you have a point, and it's not just "our industry", it's society at large that has abandoned the old in favour of incessant forgetfulness and distaste for tradition and history. I'm by no means a nostalgic, but I still mourn the harsh disjoint between contemporary human discourse and historical discourse. Some nerds still read Homer and Cicero and Goethe and Ovid and so on, but if you use a trope from any of those that would have been easily recognisable as such by Europeans for much of the last millennium, you can be quite sure that it won't generally be recognised today.

This also means that a lot of early and mid-modern literature is partially unavailable to contemporary people, because it was traditional to implicitly use much older motifs and riff on them when writing novels and making arguments, and unless you're aware of that older material you'll miss out on it. For e.g. Don Quixote most would need an annotated version which points out and makes explicit all the references and riffing, basically destroying the jokes by explaining them upfront.


Worth noting that few people use the TeX executable as specified by Knuth. Even putting aside the shift to pdf instead of dvi output, LaTeX requires an extended TeX executable with features not part of the Knuth specification from 1988.

Btw, while equations and polynomials are conceptually old, our contemporary notation is much younger, dating to the 16th century, and many aspects of mathematical notation are younger still.


This philosophy may have its place in some communities, but Python is definitely not one of them.

Even C/C++ introduces breaking changes from time to time (after decades of deprecation though).

There’s no practical reason why Python should commit to a 100+ year code stability, as all that comes at a price.

Having said that, Python 2 -> 3 is a textbook example of how not to do these things.


Python is pretty much on the other extreme, as 3.x → 3.y should be expected to break things, there's no "compatibility mode" to not break things, and the reasons for the breakage can be purely aesthetic bikeshedding.

C in contrast generally versions the breaking changes in the standard, and you can keep targeting an older standard on a newer compiler if you need to, and many do
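
To make the Python half of that concrete, here is one of those 3.x → 3.y breaks (the removal landed in 3.10, if I remember right):

    # Worked up through Python 3.9, raises ImportError on 3.10+:
    # from collections import Iterable

    # The spelling that survives:
    from collections.abc import Iterable

    print(isinstance([1, 2, 3], Iterable))  # True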


While I think LaTeX is fantastic, I think there is plenty of low-hanging fruit to improve upon it... the ergonomics of the language and its macros aren't great. If nothing else, there should be a better investment in tooling and ecosystem.


Some stuff like LAPACK and BLAS fit your bill. They are math libraries written decades ago and still in use.


LAPACK and OpenBLAS regularly release new versions


Mathematical notation has changed over the years. Is Diophantus' original system of polynomials that legible to modern mathematicians? (Even if you ignore the literally-being-written-in-ancient-Greek part.)


I agree somewhat with your sentiment and have some nostalgia for a time when software could be finished, but the comment you're replying to was making a joke that I think you may have missed.


> There's no reason we can't be writing code that lasts 100 years. Code is just math.

The weather forecast is also “just math”, yet yesterday’s won’t be terribly useful next April.


No, weather forecasting models are "just math". The forecast itself is an output of the model. I sure hope our weather forecasting models are still useful next year!

    weather forecasting models <=> code <=> math

    weather forecast <=> program output <=> calculation results
So all you're saying is that we should not expect individual weather forecasts, program output, and calculation results to be useful long-term. Nobody is arguing that.


That's why I said "[yesterday's] weather forecast" and not "weather forecast models".

But my larger point actually also stands: Weather forecast models also, in the end, incorporate information about geography, likely temperature conditions etc., and might not be stable over 100 years.

The more interesting question is probably: Is Python more like the weather or a weather forecasting model? :)


Kinda related question, but is code really just math? Is it possible to express things like user input, timings, interrupts, error handling, etc. as math?


I would slightly sort of disagree that code is just math when you really boil it down. However, if you take a simple task, say, printing hello world to the output, you could actually break that down into a mathematical process. You can mathematically say that at time T the value of O will be the value of index N of input X, so over a period of time you eventually get "hello world" as the final output.

Howeveeerrr.. it's not quite math when you break it down to the electronics level, unless you go really wild (wild meaning physics math). Take a breakdown of Python to assembly to binary that flips the transistors doing the thing. You can mathematically define that each transistor will be Y when that value of O is X(N); btw, sorry, I can't think of a better way to define such a thing from mobile here. And go further by defining the voltages to be applied, when and where, all mathematically.

In reality it's done in sections. At the electronic level, math defines your frequency, voltage levels, timing, etc.; at the assembly level, math defines what comparisons of values are to be made, what address to shift a value to, and how to determine your output; lastly, your interpreter determines what assembly to use based on the operations you give it. Based on those assembly operations, e.g. an "if A == B then C" statement in code is actually a binary comparator that checks if the value at address A is the same as the value at address B.

You can get through a whole stack with math, but much of it has been abstracted away into easy building blocks that don't require solving a huge math equation in order to actually display something.

You can even find mathematical data in datasheets for electronic components. They say (for example) that over period T you can't exceed V volts or W watts, or that to trigger a high value you need voltage V for period T but it cannot exceed current I. You can define all of your components and operations as an equation, but I don't think it's really done anymore as a practice; the complexity of doing so (as someone not building a CPU or any IC) isn't useful unless you're working on a physics paper or quantum computing, etc. etc.


Isn’t it possible to express anything as math? With sufficient effort that is.


My C++ from 2005 still compiles! (I used boost 1.32)

Most of my python from that era also works (python 3.1)

The problem is not really the language syntax, but how libraries change a lot.


> This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years.

I dunno man, there's an equal amount of bullshit that still exists only because that's how it was before we were born.

> Code is just math.

What?? No. If it was there'd never be any bugs.


> > Code is just math.

> What?? No. If it was there'd never be any bugs.

Are you claiming there is no incorrect math out there? Go offer to grade some high-school algebra tests if you'd like to see buggy math. Or Google for amateur proofs of the Collatz Conjecture. Math is just extremely high (if not all the way) on the side of "if it compiles, it is correct", with the caveat that compilation can only happen in the brains of other mathematicians.


That's human error. "Correctness vs. mistakes" applies to all human languages too, English etc.

In math, `a - b` doesn't occasionally become `b - a` if one CPU/thread/stream finishes before an other, just to give one example.

Or, if you write `1 + 2` it will forever be `1 + 2`, unlike code where it may become `3 / 4 - 5 + 6 ^ 7 + 1 + 2` or whatever junk gets appended before or after your expression tomorrow (analogy for the OS/environment your code runs in)

I guess to put it simply: code is affected by its environment, math isn't.
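
A toy sketch of what I mean by the environment leaking in - the whole point is that the output is not fixed by the source text, so run it a few times:

    import random
    import threading
    import time

    order = []

    def worker(name):
        time.sleep(random.random() / 100)  # stand-in for "the environment"
        order.append(name)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(order)  # maybe [2, 0, 3, 1] on one run, [1, 3, 0, 2] on the next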


Except uh, nobody uses infinitesimals for derivatives anymore, they all use limits now. There's still some cruft left over from the infinitesimal era, like this dx and dy business, but that's just a backwards compatibility layer.

Anyhoo, remarks like this are why the real ones use Typst now. TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.


> the real ones use Typst now

Are you intentionally leaning into the exact caricature I'm referring to? "Real programmers only use Typstly, because it's the newest!". The website title for Typst when I Googled it literally says "The new foundation for documents". Its entire appeal is that it's new? Thank you for giving me such a perfect example of the symptom I'm talking about.

> TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.

You've listed two real issues (difficult to use, difficult to integrate), and two rooted firmly in recency bias (stagnant, not written in Rust). If you can find a typesetting library that is demonstrably better in the ways you care about, great! That is not an argument that TeX itself should change. Healthy competition is great! Addiction to change and newness is not.

> nobody uses infinitesimals for derivatives anymore, they all use limits now

My point is not that math never changes -- it should, and does. However, math does not simply rot over time, like code seems to (or at least we simply assume it does). Math does not age out. If a math technique becomes obsolete, it's only ever because it was replaced with something better. More often, it forks into multiple different techniques that are useful for different purposes. This is all wonderful, and we can celebrate when this happens in software engineering too.

I also think your example is a bit more about math pedagogy than research -- infinitesimals are absolutely used all the time in math research (see Nonstandard Analysis), but it's true that Calculus 1 courses have moved toward placing limits as the central idea.


>My point is not that math never changes -- it should, and does. However, math does not simply rot over time, like code seems to (or at least we simply assume it does). Math does not age out.

Just in the same sense that CS does not age out. Most concepts stick, but I'm pretty sure you didn't go through Στοιχεία (The Elements) in its original version. I'm also pretty confident that most people out there who use many of the notions it holds and helped to spread never laid eyes on a single copy of it in their native language.


> I'm pretty sure you didn't go through Στοιχεία (The Elements) in its original version

This is like saying "you haven't read the source code of the first version of Linux". The only reason to do that would be for historical interest. There is still something timeless about it, and I absolutely did learn Euclid's postulates which he laid down in those books, all 5 of which are still foundational to most geometry calculations in the world today, and 4 of which are foundational to even non-Euclidean geometry. The Elements is a perfect example of math that has remained relevant and useful for thousands of years.


So that's it. Just because new languages and frameworks are rising and fading away, it doesn't mean there is nothing kept all along the way. It's just that the specific implementation is not the thing that people deem the most important to preserve over time.


> The website title for Typst when I Googled it literally says "The new foundation for documents". Its entire appeal is that it's new?

It might not be the best tagline, but that is most certainly not the entire appeal of Typst. It is a huge improvement over Latex in many ways.


Even if Typst was going to replace TeX everywhere right now, about half a century would still be a respectable lifespan for a software project.


> nobody uses infinitesimals for derivatives anymore

All auto-differentiation libraries today are built off of infinitesimals via Dual numbers. Literally state of the art.
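
A minimal sketch of the dual-number trick for anyone who hasn't seen it (just enough operator overloading for a polynomial; real autodiff libraries handle much more):

    class Dual:
        # numbers of the form a + b*eps, where eps**2 == 0 ("infinitesimal")
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.a + other.a, self.b + other.b)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1

    y = f(Dual(5.0, 1.0))  # seed dy/dx = 1 at x = 5
    print(y.a, y.b)        # 86.0 32.0, i.e. f(5) and f'(5)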


The latter is what happened, for upwards of ten years. And it wasn't a small fraction of the funding -- almost no funding was allocated to any research not looking at amyloid plaques, because the intellectual giants' (falsified) research was showing that that was by far the most promising avenue to explore.


Again, there were a couple of instances of fraud in one sub-branch of a sub-branch of a field ages ago, and The Internet has decided to turn it into a conspiracy theory.

On the off chance that you're actually a decent, honest person, I urge you to actually go learn about this theory that you're promoting. Go out and actually figure out the truth, because if it turns out that you're wrong about the impact and intentions, you're helping to harm a lot of people.


> You can call it "evil" if you want

It was a concerted and intentional effort to fake data and falsify research into a pervasive deadly disease, specifically in order to hoard funds going to research they knew was, if not a dead end, at least not nearly as promising as they were claiming, preventing those funds from going to other research groups that might actually make progress, essentially stealing donations from a charity, and using their power and clout to attack the reputations of anyone who challenged them. They directly and knowingly added some X years to how long it will take to cure this disease, with X being at least 2 and possibly as much as 10. When Alzheimer's is finally cured, add up all the people who suffered and died from it in the X years before that point, and this research team is directly and knowingly responsible for all of that suffering. Yes, I think I will call it absolutely fucking evil.


Yes, I share some of your concerns about groupthink and research cliques. The best and balanced critique is Karl Herrup’s book at MIT Press: “How Not To Study a Disease”.

Your comment is over the top with respect to NIH-funded researchers doing Alzheimer’s research. The emotion would be entirely appropriate if directed at RJ Reynolds Inc. and other cigarette companies, or Purdue Pharma. Those are evil companies that many governments have tolerated killing for profit - Purdue Pharma and the OxyContin disaster alone account for about 500,000 American deaths over 20 years, and US tobacco companies contribute about as many excess deaths per year.

The systematic image manipulation by a postdoctoral fellow, Dr. Sylvain Lesné, was egregious and worthy of jail time but he was a truly exceptional case and polluted 20 or more papers that did distract the entire field. You can read all about it here.

PMID: 35862524 Piller C. Blots on a field? Science. 2022 377:358-363. doi: 10.1126/science.add9993


Again, I urge you to read this article and stop promoting conspiracy nonsense. I take this very personally because my mom has this disease, and people who clearly don't understand what they're talking about fuming about conspiracies really doesn't help anyone, and probably hurts them immeasurably. https://www.astralcodexten.com/p/in-defense-of-the-amyloid-h...


Let's be careful about exactly what we mean by this. When we say an algorithm is O(f(N)), we need to be very clear about what N we're talking about. The whole point is to focus on a few variables of interest (often "number of elements" or "total input size") while recognizing that we are leaving out many constants: CPU speed, average CPU utilization, the speed of light, and (usually) the size of memory. If I run a task against some data and say "good job guys, this only took 1 second on 1000 data points! Looks like our work here is done", it would be quite unnerving to learn that the algorithm is actually O(3^N). 1000 data points better be pretty close to the max I'll ever run it on; 2000 data points and I might be waiting until the heat death of the universe.

I'm seeing some commenters happily adding 1/3 to the exponent of other algorithms. This insight does not make an O(N^2) algorithm O(N^(7/3)) or O(N^(8/3)) or anything else; those are different Ns. It might be O(N^2 + (N*M)^(1/3)) or O((N * M^(1/3))^2) or almost any other combination, depending on the details of the algorithm.

Early algorithm design was happy to treat "speed of memory access" as one of these constants that you don't worry about until you have a speed benchmark. If my algorithm takes 1 second on 1000 data points, I don't care if that's because of memory access speed, CPU speed, or the speed of light -- unless I have some control over those variables. The whole reason we like O(N) algorithms more than O(N^2) ones is because we can (usually) push them farther without having to buy better hardware.

More modern algorithm design does take memory access into account, often by trying to maximize usage of caches. The abstract model is a series of progressively larger and slower caches, and there are ways of designing algorithms that have provable bounds on their usage of these various caches. It might be useful for these algorithms to assume that the speed of a cache access is O(M^1/3), where M is the size of memory, but that actually lowers their generality: the same idea holds between L2 -> L3 cache as L3 -> RAM and even RAM -> Disk, and certainly RAM -> Disk does not follow the O(M^1/3) law. See https://en.wikipedia.org/wiki/Cache-oblivious_algorithm
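
If you want a rough feel for the effect those cache-aware designs are chasing, here's a hedged micro-benchmark sketch (pure-Python overhead blunts it and the absolute numbers are entirely machine-dependent, but the gap tends to grow once the working set outgrows the caches):

    import random
    import time

    def scan(data, indices):
        total = 0
        for i in indices:
            total += data[i]
        return total

    for n in (10_000, 5_000_000):
        data = list(range(n))
        shuffled = list(range(n))
        random.shuffle(shuffled)

        t0 = time.perf_counter()
        scan(data, range(n))   # sequential, cache-friendly order
        t1 = time.perf_counter()
        scan(data, shuffled)   # same O(N) work, cache-hostile order
        t2 = time.perf_counter()

        print(n, round(t1 - t0, 3), round(t2 - t1, 3))

Both loops do identical O(N) work in the textbook model; any gap between the two timings is exactly the kind of "constant" that the memory-aware analyses stop treating as constant.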

So basically this matters for people who want some idea of how much faster (or slower) algorithms might run if they change the amount of memory available to the application, but even that depends so heavily on details that it's not likely to be "8x memory = 2x slower". I'd argue it's perfectly fine to keep M^(1/3) as one of your constants that you ignore in algorithm design, even as you develop algorithms that are more cache- and memory-access-aware. This may justify why cache-aware algorithms are important, but it probably doesn't change their design or analysis at all. It seems mainly just a useful insight for people responsible for provisioning resources who think more hardware is always better.


> linear search worse case is not O(N), it is O(N^4/3)

No, these are different 'N's. The N in the article is the size of the memory pool over which your data is (presumably randomly) distributed. Many factors can influence this. Let's call this size M. Linear search is O(N) where N is the number of elements. It is not O(N^4/3), it is O(N * M^1/3).

There's a good argument to be made that M^(1/3) should be considered a constant, so the algorithm is indeed simply O(N). If you include M^(1/3), why are you not also including your CPU speed? The speed of light? The number of times the OS switches threads during your algorithm? Everyone knows that an O(N) algorithm run on the same data will take a different amount of time on different hardware. The point of Big-O is to have some reasonable understanding of how much worse it will get if you need to run this algorithm on 10x or 100x as much data, compared to some baseline that you simply have to benchmark because it relies on too many external factors (memory size being one).

> All regularly cited algorithm complexity classes are based on estimating a memory access as an O(1) operation

That's not even true: there are plenty of "memory-aware" algorithms that are designed to maximize the usage of caching. There are abstract memory models that are explicitly considered in modern algorithm design.


You can model things as having M be a constant - and that's what people typically do. The point is that this is a bad model that breaks down when your data becomes huge. If you're trying to see how an algorithm will scale from a thousand items to a billion items, then sure - you don't really need to model memory access speeds (though even this is debatable, as it leads to very wrong conclusions, such as thinking that adding items to the middle of a linked list is faster than adding them to the middle of an array for large enough arrays - which is simply wrong on modern hardware).
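A rough sketch of that parenthetical (sizes are my own assumptions, and Stopwatch timing like this is an illustration, not a rigorous benchmark): inserting into the middle of a List<T> shifts contiguous memory, while the LinkedList<T> first has to chase pointers just to find the middle.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class MiddleInsert
    {
        static void Main()
        {
            const int n = 1_000_000;
            var array  = new List<int>(new int[n]);
            var linked = new LinkedList<int>(new int[n]);

            var sw = Stopwatch.StartNew();
            array.Insert(n / 2, 42);                            // shifts ~n/2 contiguous ints
            Console.WriteLine($"List<int>.Insert:       {sw.Elapsed.TotalMilliseconds:F3} ms");

            sw.Restart();
            var node = linked.First;
            for (int i = 0; i < n / 2; i++) node = node!.Next;  // pointer chase, cache-hostile
            linked.AddAfter(node!, 42);
            Console.WriteLine($"LinkedList<int> insert: {sw.Elapsed.TotalMilliseconds:F3} ms");
        }
    }
On typical modern hardware the List<T> path tends to win despite the textbook O(1)-insert story for linked lists, because the traversal to the middle dominates.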

However, if you want to model how your algorithm scales to petabytes of data, then the model you were using breaks down, as the cost of memory access for an array that fits in RAM is much smaller than the cost of memory access for the kind of network storage that you'll need for this level of data. So, for this problem, modeling memory access as a function of N may give you a better fit for all three cases (1K items, 1G items, and 1P items).

> That's not even true: there are plenty of "memory-aware" algorithms that are designed to maximize the usage of caching.

I know they exist, but I have yet to see any kind of popular resource use them. What are the complexities of Quicksort and Mergesort in a memory aware model? How often are they mentioned compared to how often you see O(N log N) / O(N²)?


> It is not O(N^4/3), it is O(N * M^1/3). There's a good argument to be made that M^(1/3) should be considered a constant

Math isn't mathing here. If M^1/3 >> N, like in memory-bound algorithms, then why should we consider it a constant?

> The point of Big-O is to have some reasonable understanding of how much worse it will get if you need to run this algorithm on 10x or 100x as much data

And this also isn't true, as a simple counterexample shows: O(N) linear search over an array is sometimes faster than O(1) search in a hash map.
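For example (sizes and names are assumptions; the effect shows up mostly for small collections):

    // For a handful of keys, a linear scan touches one or two cache lines and does
    // no hashing, so in practice it can beat a Dictionary<TKey, TValue> lookup.
    static int IndexOfLinear(int[] keys, int key)
    {
        for (int i = 0; i < keys.Length; i++)   // O(N), but N is tiny and contiguous
            if (keys[i] == key) return i;
        return -1;
    }

    // versus: dict.TryGetValue(key, out var index)   // O(1), but hash + probe + indirection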


I have been coding in C# for 16 years and I have no idea what you mean by "hidden indirection and runtime magic". Maybe it's just invisible to me at this point, but GC is literally the only "invisible magic" I can think of that's core to the language. And I agree that college-level OOP principles are an anti-pattern; stop doing them. C# does not force you to do that at all, except very lightly in some frameworks where you extend a Controller class if you have to (annoying but avoidable). Other than that, I have not used class inheritance a single time in years, and 98% of my classes and structs are immutable. Just don't write bad code; the language doesn't force you to at all.


Hidden indirection & runtime magic almost always refer to DI frameworks.

Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy code. Newcomers won't know many of a DI framework's implicit behaviors & conventions until they either shoot themselves in the foot or get RTFM'd.

My pet theory is this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI framework magic.
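For what it's worth, the contrast looks roughly like this (the order types are invented; the container half assumes Microsoft.Extensions.DependencyInjection):

    using Microsoft.Extensions.DependencyInjection;

    class Wiring
    {
        static void Main()
        {
            const string connectionString = "...";   // placeholder

            // Explicit wiring (Go-style): every dependency is visible at the call site.
            var repo    = new SqlOrderRepository(connectionString);
            var service = new OrderService(repo);

            // Container-driven: resolution happens at runtime via reflection and conventions.
            var services = new ServiceCollection();
            services.AddSingleton<IOrderRepository>(_ => new SqlOrderRepository(connectionString));
            services.AddSingleton<OrderService>();
            var resolved = services.BuildServiceProvider().GetRequiredService<OrderService>();
        }
    }

    // Hypothetical types, for illustration only.
    interface IOrderRepository { }
    class SqlOrderRepository : IOrderRepository { public SqlOrderRepository(string cs) { } }
    class OrderService { public OrderService(IOrderRepository repo) { } }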

  > Just don't write bad code
Reminds me of the old C advice: "Just don't write memory leaks & UAFs!".


"Just don't write bad code" means that you can easily avoid some of the anti-patterns that people list as weaknesses of C#. Yes, maybe you inherit code where people do those things, but how much of that is because of C#, and how much is due to it being popular? Any popular language is going to have bucketloads of bad code written in it. Alternatively: "you can write bad code in any language". I'm far more interested in languages that help you write great code than those that prevent you from writing bad code. (Note that I view static typing in the "help you write great code" category -- I am distinguishing "bad code" from "incorrect code" here.)

Yes, some programming languages have more landmines and footguns than others (looking at you, JS), and language designers should strive to avoid those as much as possible. But I actually think that C# does avoid those. That is: most of what people complain about are language features that are genuinely important and useful in a narrow scope, but are abused / applied too broadly. It would be impossible to design a language that knows whether you're using Reflection appropriately or not; the question is whether their inclusion of Reflection at all improves the language (it does). C# chose to be a general-purpose, multi-paradigmatic language, and I think they met that goal with flying colors.

> Newcomers won't know many DI framework implicit behaviors & conventions until either they shoot themself in their foot or get RTFM'd

The question is: does the DI framework reduce the overall complexity or not? Good DI frameworks are built on a very small number of (yes, "magic") conventions that are easy to learn. That being said, bad DI frameworks abound.

And can you imagine any other industry where having to read a few pages of documentation before you understood how to do engineering was looked upon with such derision? WTF is wrong with newcomers having to read a few pages of documentation!?


> Just don't write bad code;

If we're writing good code then why do we even need a GC? Heh.

In decades of experience I've never once worked in an organisation where "don't write bad code" applied. I have seen people with decades of experience with C# who don't know that IQueryable and IEnumerable load things into memory differently. I don't necessarily disagree with you that people should "just write good code", but the fact is that most of us don't do that all the time. I guess you could also argue that principles like "four-eyes" would help, but they don't, even when they are enforced by legislation with actual risk of punishment, like DORA or NIS2.
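To spell out the IQueryable/IEnumerable point (db.Orders is an assumed EF Core DbSet<Order>; the surrounding context is made up):

    // IQueryable<T>: the Where is translated to SQL, so filtering happens in the database.
    var recent = db.Orders
        .Where(o => o.Created > cutoff)
        .ToList();

    // IEnumerable<T>: AsEnumerable() ends query translation, so every row is streamed into
    // the process and the filter runs in memory, which is easy to do by accident.
    var recentInMemory = db.Orders
        .AsEnumerable()
        .Where(o => o.Created > cutoff)
        .ToList();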

This is the reason I favour Go as a cross-platform GC language over C#: with Go you are given fewer opportunities to fuck up. There is still plenty of chance to do it, just fewer than in other GC languages. At least on the plus side, for .NET 10 they're going to improve IEnumerable with their devirtualization work.

> hidden indirection and runtime magic"

Maybe not in C#, but C# is .NET, and I don't think it's entirely fair to decouple C# from .NET and its many frameworks. Then again, I could have made that more clear.


Some examples:

- Attributes can do a lot of magic that is not always obvious or well documented.

- ASP.NET pipeline.

- Source generators.

I love C#, but I have to admit we could have done with less “magic” in cases like these.


Attributes do nothing at all on their own. It's someone else's code that does magic by reflecting on your types and looking for those attributes. That may seem like a trivial distinction, but there's a big difference between "the language is doing magic" and "some poorly documented library I'm using is doing magic". I rarely use and generally dislike attributes. I sometimes wonder if C# would be better off without them, but there are some legitimate usages like interop with unmanaged code that would be really awkward any other way. They are OK if you think of them as a weakly enforced part of the type system, and relegate their use to when a C# code object is representing something external like an API endpoint or an unmanaged call. Even this is often over-done.
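To make that distinction concrete (the attribute and types here are invented for illustration):

    using System;
    using System.Linq;

    // The attribute itself is inert metadata; it does nothing until someone scans for it.
    [AttributeUsage(AttributeTargets.Class)]
    sealed class EndpointAttribute : Attribute
    {
        public EndpointAttribute(string route) => Route = route;
        public string Route { get; }
    }

    [Endpoint("/orders")]
    sealed class OrdersHandler { }

    static class AttributeDemo
    {
        // The "magic" lives here, in whichever library reflects over your types.
        static string? RouteOf(Type t) =>
            (t.GetCustomAttributes(typeof(EndpointAttribute), inherit: false)
              .FirstOrDefault() as EndpointAttribute)?.Route;

        static void Main() =>
            Console.WriteLine(RouteOf(typeof(OrdersHandler)) ?? "(no route)");   // prints /orders
    }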

Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.

Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.

Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.

That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).


> But I haven't used source generators in C# in at least 10 years

Source generators didn't exist in C# 10 years ago. You probably had something else in mind?


Huh? 10 years ago was 2015. Entity Framework had been around for nearly 7 years by then, and as far back as I remember using it, it used source generators to subclass your domain models. Even with "Code First", it generated subclasses for automatic property tracking. The generated files had a whole other extension, like .g.cs or something (it's been a while), and Visual Studio regenerated them on build. I eventually figured out how to use it effectively without any of the silly code generation magic, but it took effort to get it to not generate code.

ASP.NET MVC came with T4 templates for scaffolding out the controllers and views, which I also came to view as an anti-pattern. This stuff was in the official Microsoft tutorials. I'm really not sure why you think these weren't around?


That's lowercase "source generators". They're referring to title-case "C# Source Generators", an actual feature slowly replacing some uses of reflection.

https://devblogs.microsoft.com/dotnet/introducing-c-source-g...


    > But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
A challenge with .NET web APIs is that, when you're working with a payload deserialized from JSON, it's not possible to tell whether a property is `null` because it was explicitly set to `null` or `null` because it was not supplied.

A common way to work around this is to provide a `IsSet` boolean:

    private string? _name;
    private bool _isNameSet;

    public string? Name { get => _name; set { _name = value; _isNameSet = true; } }
Now you can check if the value is set.

However, you can see how tedious this can get without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically.

    public partial string? Name { get; set; }
Now a single marker attribute will generate as many `Is*Set` properties as needed.
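For anyone who hasn't seen this flavor of it, the generated half might look roughly like the following; the class name and the IsNameSet shape are assumptions, and the real output depends on the generator:

    // Hand-written half: public partial string? Name { get; set; }
    // Hypothetical generator output for the other half:
    public partial class CreateUserRequest
    {
        private string? _name;

        public bool IsNameSet { get; private set; }

        public partial string? Name
        {
            get => _name;
            set { _name = value; IsNameSet = true; }
        }
    }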

Of course, the other use case is for AOT: avoiding reflection by generating the source at compile time.


That is tedious with or without a source generator, mainly because there's a much better way to do it:

    public Optional<string> Name;
With Optional being something like:

    class Optional<T> {
      public T? Value;
      public bool IsSet;
    }
I'm actually partial to using IEnumerable for this, and I'd reverse the boolean:

    class Optional<T> {
      public IEnumerable<T> ValueOrEmpty;
      public bool IsExplicitNull;
    }
With this approach (either one) you can easily define Map (or "Select", if you choose LINQ verbiage) on Optional and go delete 80% of your "if" statements that are checking that boolean.
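A minimal sketch of that Select, using the first Optional<T> shape above (payload.Name in the usage comment is hypothetical):

    using System;

    static class OptionalExtensions
    {
        // Apply f only when the value was actually supplied; propagate "not set" otherwise.
        public static Optional<TOut> Select<TIn, TOut>(this Optional<TIn> source, Func<TIn?, TOut?> f) =>
            source.IsSet
                ? new Optional<TOut> { Value = f(source.Value), IsSet = true }
                : new Optional<TOut> { IsSet = false };
    }

    // Hypothetical usage, with no IsSet checks at the call site:
    //     var upper = payload.Name.Select(n => n?.ToUpperInvariant());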

Why mess with source generators? They just make a fundamentally painful approach slightly less tedious.

I'd strongly recommend that if you find yourself wanting Null to represent two different ideas, then you actually just want those two different ideas represented explicitly, e.g. with an Enum. Which you can still do with a basic wrapper like this. The user didn't say "Null", they said "Unknown" or "Not Applicable" or something. Record that.

    public OneOf<string, NotApplicable> Name { get; set; }
A good OneOf implementation is here (I have nothing to do with this library, I just like it):

https://github.com/mcintyre321/OneOf

I wrote a JsonConverter for OneOf and just pass those over the wire.
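Not claiming this is the same converter, but the idea is roughly this; NotApplicable is assumed to be your own marker type, serialized as JSON null:

    using System;
    using System.Text.Json;
    using System.Text.Json.Serialization;
    using OneOf;

    public readonly struct NotApplicable { }

    public sealed class StringOrNotApplicableConverter : JsonConverter<OneOf<string, NotApplicable>>
    {
        public override OneOf<string, NotApplicable> Read(
            ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
            reader.TokenType == JsonTokenType.String
                ? OneOf<string, NotApplicable>.FromT0(reader.GetString()!)
                : OneOf<string, NotApplicable>.FromT1(new NotApplicable());

        public override void Write(
            Utf8JsonWriter writer, OneOf<string, NotApplicable> value, JsonSerializerOptions options) =>
            value.Switch(
                s => writer.WriteStringValue(s),   // the real string
                _ => writer.WriteNullValue());     // "not applicable" travels as null
    }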


I don't really consider any of these magic, particularly source generators.

It's just code that generates code. Some of the syntax is awkward, but it's not magic imo.

