> It's not that philosophy is dumb. It's just that it's useless.
If I take a moment and think that statement through, I highly doubt you even believe it. And if it is sincere, then it just reinforces the point of the person you were replying to.
Do you think logic is useless? What about ethics and politics? What counts as science? What counts as knowledge? These are all philosophical inquiries.
Even your statement that it's "useless" is a philosophical judgement. What makes something "useful" and not a waste of time? I guess you can make the argument that certain fields within philosophy are useless, but at this point, you're already doing philosophy again.
You can try and ignore philosophy, but you're not going to avoid doing it. At worst, you'll just be doing it poorly.
When working with a larger code base, there will always be parts that you don't remember writing and you'll inevitably have to read the code to understand it. That's just part of the job/task, regardless of the style it's written in.
In shared code, particularly with a culture of refactoring, there's no guarantee that the function call you see is doing what you remember it doing a year ago.
When I was coming up I got gifted a bunch of modules at several jobs because the original writer couldn't be arsed to keep up with the many incremental changes I'd been making. They had a mentality that code was meant to be memorized instead of explored, and I was just beginning to understand code exploration from a writer's perspective. So they were both on the wrong side of history and the wrong side of me. Fuck it, if you want it so much, kid, it's yours now. Good luck.
"Make the change easy, then make the easy change" hadn't even been coined as a phrase yet when I discovered the utility of that behavior. When I read 'Refactoring (Fowler)' it was more like psychotherapy than a roadmap to better software. "So that's why I am like this."
When we get unstuck on a problem it's usually due to finding a new perspective. Sometimes those come as epiphanies, but while miracles do happen, planning on them leads to disappointment. Sometimes you just have to do the work. Finding new perspectives 'the hard way' involves looking at the problem from different angles, and if explaining it to someone else doesn't work, then often enough just organizing a block of code will help you stumble on that new perspective. And if that also fails, at least the code is in better shape now.
Not long after I figured out how to articulate that, my writer friend figured out the same thing about creative writing, so I took it as a sign I was on the right track.
I do know that the first time I was doing that, it was for performance reasons. I was on a project that was so slow you could see the pixels painting. My first month on that project I was doing optimizations by saying "1 Mississippi" out loud. The second month I used a timer app. I was three months in before I even needed to print(end - start).
> there's no guarantee that the function call you see is doing what you remember it doing a year ago.
TDD provides those guarantees. If someone changes the behaviour of the function you will soon know about it.
That's significant because Robert 'Clean' Martin sells clean code as a solution to some of the problems that TDD creates. If you reject TDD, clean code has no relevance to your codebase. As Casey does not seem to practice TDD, it is not clear why he thought clean code would apply to his work.
It doesn't. TDD is about writing new code. It doesn't say anything about existing tests being sacrosanct, or pinning tests sticking around forever. I can extract code from a function and write tests for it. I probably know that there's still code that checks for user names but I can't guarantee that this code is being called from function X anymore, or whether it's before or after calling function Y. Those are the sorts of things people try to memorize about code. "What are the knock-on effects of this function" doesn't always survive refactoring. Particularly when the refactoring is because we have a new customer who doesn't want Y to happen at all. So now X->Y is no longer an invariant of the system.
TDD is about documenting behaviour, which is why it was later given the name Behaviour Driven Development (BDD): to dispel the myth that it is about testing. It is true that you need to document behaviour before writing code, else how would you know what to write? Even outside of TDD you need to document the behaviour in some way before you can know what needs to be written.
A function's behaviour should have no reason to change after its behaviour is documented. You can change the implementation beneath to your heart's content, but the behaviour should be static. If someone attempts to change the behaviour, you are going to know about it. If you are not alerted to this, your infrastructure needs improvement.
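To make that concrete, here's a minimal sketch of the kind of behavioural pin I mean. Nothing here comes from the thread: the function, the names, and the jest-style setup are all illustrative assumptions.

```typescript
// A behavioural test pins what callers can observe, not how the unit works.
// If someone changes the observable behaviour of normalizeUsername, this
// fails and the change is surfaced immediately.
import { describe, expect, test } from "@jest/globals";

// The unit under test (hypothetical implementation).
function normalizeUsername(raw: string): string {
  return raw.trim().toLowerCase();
}

describe("normalizeUsername", () => {
  test("strips surrounding whitespace and lowercases", () => {
    expect(normalizeUsername("  Alice ")).toBe("alice");
  });

  test("leaves an already-normalized name unchanged", () => {
    expect(normalizeUsername("bob")).toBe("bob");
  });
});
```

The tests say nothing about the implementation underneath, so it can be rewritten freely; only a change to the observable behaviour trips them.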
> A function's behaviour should have no reason to change after its behaviour is documented.
That's only true with spherical cows. That something happens is a requirement. When it happens is often only as specific as 'before' or 'after', but tests often dictate that it happens 'between', which is not an actual requirement; it's an accident of implementation. It was 'easy' to put it here.
Nowhere is it written that behavior in a system is strictly additive.
Systems are full of XY problems. When you recognize that, and start addressing that problem, you sprout a lot of tests for the Y solution and block delete tests for the X solution. That behavior doesn't exist in the system anymore because it's answering the wrong question. Functional parity tests can be copied, or written in parallel. But the old tests disappear with the old code (when the feature toggle goes away).
Leaving the code for X around is at best a footgun for new devs, and at worst a sign of hoarding behavior of an intensity that requires therapy.
You're espousing a process whereby you've nailed one foot to the deck, preferring form over function. Whether you believe what you're saying or not I can't say, but it's restrictive and harmful.
> Nowhere is it written that behavior in a system is strictly additive.
For a unit of the same identity to suddenly start doing something different is plain nonsensical, never mind the technical challenges that come with breaking behaviour, which should scare anyone away from trying. Logically, a unit is additive until it is no longer used, at which point it can be eliminated.
> But the old tests disappear with the old code (when the feature toggle goes away).
Absolutely, but static analysis can easily determine that the tests being removed correspond with units being removed. If (TDD) tests are removed and the unit code isn't, something has gone wrong and your infrastructure should make this known.
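As a sketch of what such a check could look like (this is not an existing tool; the foo.ts / foo.test.ts naming convention and the CI diff range are assumptions on my part):

```typescript
// CI sketch: fail the build if a test file was deleted while the unit it
// covers was kept. Assumes every src/foo.ts is paired with src/foo.test.ts.
import { execSync } from "node:child_process";

const diff = execSync("git diff --name-status origin/main...HEAD", {
  encoding: "utf8",
});

// Deletion lines look like "D\tsrc/foo.test.ts".
const deleted = diff
  .split("\n")
  .filter((line) => line.startsWith("D\t"))
  .map((line) => line.split("\t")[1]);

const deletedSet = new Set(deleted);
const orphanedTestRemovals = deleted
  .filter((path) => path.endsWith(".test.ts"))
  .filter((path) => !deletedSet.has(path.replace(/\.test\.ts$/, ".ts")));

if (orphanedTestRemovals.length > 0) {
  console.error("Tests removed without their units:", orphanedTestRemovals);
  process.exit(1);
}
```

A failure here is the "something has gone wrong" signal: a behavioural test disappeared while the unit it documented stayed behind.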
Refactors compose. In three months you can completely rearchitect a module without breaking it at any point in the process. That’s the promise of refactoring.
Functions don’t have an identity. There is no such thing. I don’t know who taught you that but they have broken you in the process. Renaming things is a refactoring. We don’t check the entire commit history to make sure that function name has never existed. Only that it hasn’t existed recently. There’s no identity.
One of the reasons to refactor is that the function has been lying about its responsibilities. So you extract steps out of it, create a new call path that fixes the discrepancy, migrate the call sites, delete the incorrect function, and then, if the function name was really good, you might wait a while and rename the new function to the old name. Each step makes sense if you’ve followed the entire process. If you haven’t been following along at all then you have absolutely no idea how things got here until you read the git history thoroughly, which some people can’t do, and others won’t do if they expect the code to be static.
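If it helps, here is that dance in miniature; every name below is hypothetical and only there to illustrate the sequence.

```typescript
// Illustrative only: none of these names come from the discussion.
interface Order {
  id: string;
  customerEmail: string;
}

const validate = (order: Order): void => {
  if (!order.id) throw new Error("order must have an id");
};
const persist = (order: Order): void => {
  console.log(`persisting ${order.id}`);
};
const sendConfirmationEmail = (order: Order): void => {
  console.log(`emailing ${order.customerEmail}`);
};

// Before: the name says "save", but the function also emails the customer.
function saveOrder(order: Order): void {
  validate(order);
  persist(order);
  sendConfirmationEmail(order); // the lie: a hidden side effect
}

// Step 1: extract the honest behaviour into a new call path.
function persistOrder(order: Order): void {
  validate(order);
  persist(order);
}

// Step 2: migrate call sites, making the email explicit where it is wanted.
function placeOrder(order: Order): void {
  persistOrder(order);
  sendConfirmationEmail(order);
}

// Step 3: delete saveOrder once no callers remain.
// Step 4 (optional, much later): if "saveOrder" was a genuinely good name,
// rename persistOrder back to it.
```

Each intermediate step compiles and passes tests, which is exactly why someone who only sees the endpoints of the history can't reconstruct how the code got here.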
Also, to clarify, I'm talking about cumulative changes. If I'm working with someone on a feature then we both see all of the changes as they occur. If I'm off dealing with some long initiative, I may not look at that code for 3 months and so I miss all of the intermediate states that made perfect sense at the time.
Like visiting a friend who did their own house remodel. Their spouse saw all the steps, all you saw was before and after, and so the fact that the bathroom door is missing is confusing. The bathroom still exists, but now it's the master bath.
You seem to ignore that when the unit changes, the tests do too. If you come back a year later, foo.bar.baz(quux) might have been refactored and lazily so. The tests were also updated and still pass. You may jump into the code only to realize that someone no-op'd everything and never removed call sites. TDD is primarily a design tool, not a lock-into-implementation tool.
Of course any code requires some refresher at times, but the difficulty and time required to figure it out again is a spectrum that goes all the way down to the seventh circle of hell.
It's an unpopular opinion depending on what you're recommending.
Telling someone "read the Nicomachean Ethics by Aristotle" will lead someone to just pick up any copy of the Nicomachean Ethics. There are a lot of translations of the Nicomachean Ethics and they are not all equal. They range from very good translations, to idiosyncratic readings of the text, to flat out bad translations.
Translation issues aside, the ancients you've recommended are good to start with, and pointing someone to their primary texts is just fine. Those texts are easy to digest without a formal background in philosophy.
But for other philosophers (e.g. Nietzsche, Kierkegaard, Hegel), telling someone to just "pick up one of their primary texts" is a disaster. It will either (a) be complete gibberish to the reader without any context and they may just give up, or worse, (b) they'll think they understand something without the proper context and spew nonsense about that philosopher (this is why there are so many bad readings of Nietzsche).
So depending on what you're recommending, primary sources can be good, but in my experience they aren't most of the time. Moreover, if someone is interested in a specific field like the OP is, then a good secondary source can be extremely helpful for giving them an overview and a proper understanding of the topic.
"for other philosophers (e.g. Nietzsche, Kierkegaard, Hegel), telling someone to just "pick up one of their primary texts" is a disaster. It will either (a.) be complete gibberish to the reader without any context and they may just give up, or worse, (b.) they think they'll understand something without the proper context and spew nonsense in regards to that philosopher (this is why there are so many bad readings of Nietzsche)."
This is why you'll want to read those primary sources as part of a class, instead of just trying to go it alone.
But even that won't save you from "bad readings" of Nietzsche or any other philosopher, as plenty of experts disagree on what he meant. He, like some other philosophers, just wrote in a way that doesn't have one obvious meaning that everyone can agree on. With experience and study, you can make up your own mind, which will be better than swallowing some other person's pre-digested interpretation of him.
You might have an easier time understanding a secondary source's interpretation of Nietzsche, but that doesn't mean that you understand Nietzsche.
My post was mainly about the recommendation to have someone just pick up a primary source and start reading it. But I agree that, if you're taking a class and you have someone who can go through the text with you, then that's the best option instead of trying to go it alone. But that's different from just sending someone straight to the primary source alone.
And of course there are many interpretations of Nietzsche and there's reasonable disagreement on what he said. You're right that a secondary source or taking a class doesn't "save you from bad readings" of him, but it's still better than trying to go it alone.
There are many flat out wrong interpretations of him, and someone like a professor or a secondary source can definitely help avoid common misunderstandings and pitfalls when trying to read him.
"There are many flat out wrong interpretations of him, and someone like a professor or a secondary source can definitely help avoid common misunderstandings and pitfalls when trying to read him."
That really depends on who you read. If you read only a secondary source instead of the primary source, and that secondary source happens to have misinterpreted the primary, you're going to be misled.
If you read the primary source you're at least going to have the chance to make up your own mind, even if it's difficult to do so... and even if you can't, you might at least see that what the primary source actually says might not be as straightforward and obvious as the secondary source maintains.
But please don't think I'm against secondary sources altogether. They can be a useful adjunct to reading the primary sources. Ideally, though, you'd have multiple secondary sources (ones that disagree with one another), so you don't fall into any one person's reality tunnel.
This is especially important in philosophy. I can't count the number of times I've read secondary sources which I consider to have completely misunderstood the primary sources they were commenting on, and how frequently secondary sources disagree with one another (especially on the more "difficult" philosophers).
> Our next major challenge: We are dealing with 21 million lines of code.
I think I just fail to understand the true complexity of a browser, but how is Firefox 21 million lines of code? How can a browser be 21 million lines of code? That just seems so large for what a browser does.
A browser lays out text and graphics (for any script in any locale), does GPU accelerated 3D rendering, handles network communication, plays video, plays audio, provides accessibility, parses a bunch of languages, compiles a couple of languages to machine code run in a security sandbox (and has a separate optimizing compiler), provides database storage, implements a plugin system, provides support for 25 years of legacy standards. And a bunch of more things.
It also reimplements all of those things itself from scratch because their priority is on cross-platform parity, not efficiency.
Every time I see someone working on a FF bug I’m cc’d on, I marvel at how many files they have to touch to get anything done. One key binding missing, because they’re not using the system text field? Hundreds of lines of diffs, spread across a dozen source files.
I think you’re underestimating what browsers do. They have an incredibly performant and highly-tuned VM, an extremely flexible and powerful layout engine, a comprehensive set of media players, and that’s not even talking about networking, security, and UI.
This seems to scream "separate me!". Why is it acceptable to put all this responsibility into one giant ugly ball? We desperately need separate layout engines, parsers, renderers, etc. Assemble-your-own-browser.
Because the entire advantage of the web from the perspective of the end user (as opposed to desktop applications) is that you can expect it to work the same everywhere. If everything were separated, it wouldn't make a difference, because Chrome and MobileSafari will continue shipping one integrated stack, even if they're forced to open up APIs for modularity.
What tangible advantage would this have? Having competing browsers seems possible, but since end-users wouldn't be choosing individual browser components anyway, it wouldn't actually improve anything.
It sounds like a good idea to be implemented by a technical user, and a solution looking for a problem to every other kind of user. I thought dependency hell was bad, but this sounds like a further circle of that hell. I like that I can tell a user to just update their browser and OS and be confident that most users can handle that operation. I know how to tell a non-technical user how to update their web browser in words they can understand. I'd have to write a whole new script to explain the scope and limitations of using a web browser as a compiled application versus the proposed modular architecture, just so they could tell me they don't want or need that.
Choice matters. Sometimes you have to make a choice on behalf of your users. I think that’s what the industry as a whole has done here. Nothing is preventing people from rolling their own modular web browser. Maybe things like Beaker Browser and TOR Browser are versions of that same idea but approached from a specific use case.
Thanks for this comment. I think your idea is good but not for the same reasons that a web browser suite is a good idea for most non-technical users.
It doesn't seem that crazy to me. Calling them web browsers is kind of anachronistic at this point. Just the JS engine itself seems like it would be massively complex: it's a VM with a JIT compiler, which I can only imagine is quite cumbersome to implement securely.
It does effectively. With Gmail at least, there's an option to register Chrome as the mailto handler. Anytime you click on a mailto link anywhere on your computer, it will open Chrome to the compose window in Gmail. And Gmail works offline, so it's not really any different from having a traditional email client built into your browser.
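For what it's worth, I believe the mechanism behind that option is the registerProtocolHandler API; a minimal sketch, with an illustrative URL rather than Gmail's actual endpoint:

```typescript
// Ask the browser to route mailto: links to this web app. The browser
// replaces "%s" with the encoded mailto: URL that was clicked.
// (Illustrative URL; the user has to approve this via a browser prompt.)
navigator.registerProtocolHandler(
  "mailto",
  "https://mail.example.com/compose?to=%s"
);
```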
This is something I really deplore about the state of modern software development. Taken collectively, it's mind-boggling the amount of waste there is in terms of memory and CPU (i.e. energy) usage by running a huge portion of consumer software in the browser. We could achieve the same result so much cheaper, with better performance and UX, by putting a serious effort into a set of standards for handling code from an arbitrary source securely.
HTML+ECMAScript keeps the developer at quite a distance from the hardware, and it imposes all kinds of limitations which don't have to exist.
The web is good enough for pure content delivery, but for more advanced interactive applications, the limitations are a huge waste. For instance, I have a computer at home with a 12-core, 24-thread processor, but web applications are basically single-threaded with some very limited support for web workers. I'm also limited to basically one language, which is garbage collected, so performance and memory usage are significantly worse than they could be. And as far as graphics, you're limited to WebGL which is ages behind the state of the art in terms of graphics APIs.
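To illustrate the worker point: parallelism exists, but only as message passing between isolated workers, not shared-memory threads. A rough sketch (file names are hypothetical, and this assumes a bundler that understands the new URL(...) worker pattern):

```typescript
// worker.ts -- runs off the main thread (compiled against the webworker lib)
self.onmessage = (event: MessageEvent<number[]>) => {
  // The data arrives as a structured-clone copy, not a shared reference.
  const sum = event.data.reduce((a, b) => a + b, 0);
  self.postMessage(sum);
};

// main.ts -- the page's single main thread
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});
worker.onmessage = (event: MessageEvent<number>) => {
  console.log("sum computed off the main thread:", event.data);
};
worker.postMessage([1, 2, 3, 4]);
```

Even then, everything crossing postMessage is copied (structured clone) unless you opt into transferables or SharedArrayBuffer, so it's a long way from simply spinning up 24 threads over shared data.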
Basically there is a whole universe of tools out there for software development, but if you want to target web you're limited to a handful of poorly performing, hacky options.
For my work, we use several web applications to collaborate. As someone who works on highly optimized user-facing software, it's frustrating to see how poorly these web applications perform given the potential of modern hardware.
The reason we use them is because you can send anyone a link, and they're into the software in one click. There's no reason we couldn't have the same experience for native software, where when you click the link your computer downloads a native binary and runs it in a sandboxed environment against a standardized system API. It would take a lot of careful work and planning, but there's no technical reason it couldn't exist, and it would be so much better for developers and users.
But if it were to run on every platform and processor the binary would have to be written to some VM, which would slow it down a bit. That’s webassembly, which is coming up.
Also, there would have to be some GUI framework that works well across all devices, phone to desktop. That alone would be a massive undertaking.
I agree there is no technical reason, but once you consider all the limitations and let some time pass, you might end up with something similar to what we have now. For example you might standardize on a current graphics API, but in a few years: guess what, you have an outdated API like WebGL now.
> But if it were to run on every platform and processor the binary would have to be written to some VM
Not necessarily. You could have different binaries for different architectures. All you need is a consistent system interface with a well-specified ABI, and the client would just have to request the binary for their architecture.
> That’s webassembly, which is coming up.
WebAssembly still has the limitations of running inside the browser. It's still single-threaded, and the file size is pretty large compared to a native binary.
> Also, there would have to be some GUI framework that works well across all devices, phone to desktop.
If you had a consistent system interface, you'd only have to do it once.
> For example you might standardize on a current graphics API
I mean, Vulkan is basically this. And graphics APIs are converging, not diverging: modern graphics APIs are all structured in a very similar way, since they're all just wrappers around the low-level functionality of the GPU at this point.
I'm not saying it would be easy, but the reasons we don't have it have more to do with the fact that it would be difficult to get buy-in from OS vendors than it has to do with the technical problems involved. From a technical perspective, we could easily do way better than modern web.
JS is generally very fast because these monolithic VMs have had so much money poured into them that their optimisation process is unparalleled. V8 can sometimes get performance on par with C. "Overhead" is arguable, JS may be garbage in a lot of ways but modern JS is quite efficient.
"Sometimes" is the key word there. It's amazing what kind of performance can be reached with JS given the monumental amount of effort that has gone into optimizing it, but there are structural reasons JS has a performance ceiling. JIT compilation can't change the fact that it's a highly dynamic, weakly typed language which is garbage collected. You might be able to find examples where a JIT-compiled block of JS code performs comparably to a similar block of C code, but there are design patterns available in languages like C/C++ and Rust which aren't even expressible in JS, and which allow an extremely high level of optimization through careful memory management techniques.
Also, the web is basically a single-threaded runtime environment. For the foreseeable future, increased parallelism is the main way we can expect to get increased computing performance, and it's basically not available on the web.
The biggest thing you would probably be sacrificing is development time for every engineer involved. The html/css/js stacks have had unimaginable amounts of manpower invested, more so than any UI stack in history. This has made them much easier to learn and manipulate. Throwing away all of that would be a very hard sell.
I agree there would be an adoption problem for something new. But if there were a viable alternative, and people managed to build killer apps on it, it might be possible to draw adoption over time as people got to experience the superior performance.
This LOC count is going to be a huge problem in the future, and you'll see it as the bug count climbs. It'll be hard for people to understand how the system works and hard for them to ramp up. You'll see the slow decline if there's ever a point in which their funding is reduced and employees have to be laid off.
I kind of get what systemizer is saying. People may think of the evolution of technologies as cyclical, but it never quite is. A new technology 'Y' is always developed because the incumbent 'X' has some shortcomings. And even after a period of disillusionment, when we revert back to 'X', it is not always the same: we synthesize the good points of 'Y' back into 'X'.
Coming to this topic, I see Microservices as a solution to the problem of achieving Continuous Delivery, which is necessary in some business models. I can't see those use cases reverting back to a Monolith architecture. For such scenarios, the problems associated with Microservices are engineering challenges, not avoidable architecture choices.
It's true, but since vim is a bit old, copy/pasting can be a bit awkward: by default it yanks into its own internal registers rather than the system clipboard.
I wouldn't put that in a beginner tutorial (at least the system clipboard copying/pasting).