First do it, then do it right, then do it better (twitter.com/addyosmani)
205 points by erfanebrahimnia on Jan 2, 2024 | 137 comments


On the other hand, in software-related matters, I have the increasing feeling that kludges and temporary decisions made in the "First do it" stage tend to be carried forward indefinitely through the other phases.

Thus, lots of stuff that falls into the "Important, but Not Urgent" quadrant of the Eisenhower Matrix ends up never getting its proper development time. One could argue, "well, then maybe it wasn't actually that important, was it?" but I'd reply that the criterion usually used to define something as important is growth potential, and that's the wrong bar to use.

That's how "we'll build it in Electron for the time being and rebuild it as proper native apps later if the idea works" ends up still being Electron 10 years later. Or how "we'll make our own controls and worry about accessibility later" turns into never worrying about it at all.


Because more often than not, good enough is good enough. Electron is a great example. Look at the market share of Slack, Discord, Spotify, etc.

Optimizing as step one is a very good way to not ship on time and miss opportunities to the competition.


Electron is only good enough because the companies themselves aren't the ones paying its cost (in resources and performance).

What gets considered important are the things that hurt them rather than their users. In many ways it's an abuse of users.


Yes, yes, and a thousand times yes. People usually say "so what, RAM is there to be used" (I even think I saw it in this thread).

But when three of these bulky apps are eating all the RAM and no other software can run because of them, it's the user, especially the not-very-savvy user, who usually assumes their computer can no longer handle modern software and incurs the effort and cost of buying a new one.

Electron is a fine way to test a market, but it is only financially viable to use long term (as Slack, Discord, and Spotify do) because it externalizes the actual cost to its users.

For an app that is open-use-close, it's OK and I see no issue. But for apps meant to run constantly in the background (like a music player or a chat app) I think Electron is almost a sin against its users.


The companies do pay a cost — lost users and revenue due to the downsides.

But they’ve determined that this cost isn’t meaningful enough to justify writing performant, platform-specific native versions.


Agreed, and as much as people hate it, this applies to a11y too. Ultimately, businesses only do work that is a high enough priority for the business. If you have a massive customer base, and/or specific important customers who require strong a11y, then you prioritize it. But if you have a smaller customer base, with no important customers pushing for a11y, it’ll probably never get prioritized.

That obviously sucks for your users who really need it, but if there aren’t many of them, the site/app/whatever will just be hard to use for them, probably forever.



For an article that already seems overly long and wordy, and that includes things like:

    On the initial use of an unfamiliar acronym within a document or a section, spell out
    the full term, and then put the acronym in parentheses.
... then goes on to lecture people about making things understandable... not even mentioning wtf "a11y" is supposed to mean is kind of taking the piss. :/


It looks like a deliberate decision not to explain the term until the very end ... a tongue-in-cheek demonstration of the problem being described. They do eventually give the expansion, but even that is obfuscated by tucking it into an HTML abbr tag.


Yeah, I got about 3 paragraphs into it, then did a search for a11y, then just closed the page.


This is a tech specific community, I think it’s perfectly reasonable to use highly common tech abbreviations like a11y, TTL, K8s, DNS, SaaS, PaaS, FaaS, IaaS, i18n, DoS, DDoS, ISP, CRM, API, etc. We all know what they mean, and it saves typing.

In a non-tech specific community, I wouldn’t use these abbreviations, but I see no issue with them here.


I've been in tech for over a decade. I know all of your examples -- except a11y and i18n. Numeronym proliferation is a bit insane right now, and really needs to be stopped. :(


Hmm, wild! I totally believe you, it’s just surprising to me. I’ve encountered these abbreviations constantly over the years, at multiple different tech companies, even though I work primarily as a backend engineer. They are FE concepts; are you by any chance a heavily BE-focused engineer who doesn’t really touch frontends?


Have you seen o11y to abbreviate observability?
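For the uninitiated, these numeronyms keep a word's first and last letters and replace everything in between with a count of the dropped letters. The rule is small enough to sketch in a few lines of Python (the function name is just illustrative):

```python
def numeronym(word: str) -> str:
    """Abbreviate a word by keeping its first and last letters
    and replacing the middle letters with their count."""
    if len(word) <= 3:
        return word  # too short to abbreviate meaningfully
    return f"{word[0]}{len(word) - 2}{word[-1]}"

for w in ("accessibility", "internationalization", "observability", "localization"):
    print(w, "->", numeronym(w))
# accessibility -> a11y, internationalization -> i18n,
# observability -> o11y, localization -> l10n
```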


Finally I can express my opinion by linking to someone else's.

Thank you.

I hate the "a11y" thing so much >_<


+1


I don’t think that’s specific to software.

After all, the phrase ‘there’s nothing so permanent as a temporary repair’ comes from construction.

I think the difference is that rules and licensing were created around physical repairs and construction because of the human tendency to ‘good enough’ it.

And to be fair, software with the same safety importance as bridge structural calculations does have a level of scrutiny commensurate with its importance; see NASA and their caution, and flight/avionics software for aircraft, etc.


I think the "good enough" mindset should always be challenged by a voice pushing for perfection, and solutions should try to land somewhere in between the two extremes.


But outside of the small percentage of technical people who know how inefficient e.g. Discord and Slack are, and how you could replace these applications that take 800MB of RAM each with something that took less than 50MB if they weren't using Electron... nobody cares. If people did care, you'd have someone writing a highly optimized chat/voip app from the ground up in Rust and native code for the mobile versions and people switching en masse. But it just hasn't happened, because for 99.999% of the world, they're good enough.
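Figures like that are easy to check yourself. On Linux, for instance, each process reports its resident set size in /proc; a minimal sketch (Linux-only, a rough measure since shared memory gets double-counted across an Electron app's helper processes, and the exact numbers will vary by machine and app version):

```python
import os
import re

def rss_mb(pid: int) -> float:
    """Return a process's resident set size (VmRSS) in MiB,
    read from /proc. Linux-only; returns 0.0 if the process is gone."""
    try:
        with open(f"/proc/{pid}/status") as f:
            match = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
        return int(match.group(1)) / 1024 if match else 0.0
    except FileNotFoundError:
        return 0.0

def total_rss_mb(name: str) -> float:
    """Sum RSS across all processes whose command name contains `name`,
    e.g. total_rss_mb("slack") to total up an Electron app's processes."""
    total = 0.0
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                with open(f"/proc/{entry}/comm") as f:
                    if name in f.read():
                        total += rss_mb(int(entry))
            except FileNotFoundError:
                pass  # process exited while we were iterating
    return total

print(f"this interpreter: {rss_mb(os.getpid()):.1f} MiB")
```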


I don't know that this is accurate either, though. Sure, as you say, there's a small number of technical people who understand why an app taking 800MB+ of RAM causes the sluggishness felt by 100% of users. There are other savvy-enough people who know that having Slack and/or Discord open makes things less snappy. So even if they don't know why, they would appreciate the rewritten app as much as the rest of us.


Of course if a snappier UI came "for free" nobody would be mad about it. But how many people would actually switch chat apps because of it? Discord and Slack are valued at $15b and $27b respectively; if you could get people to switch to a more efficient program written in native code, you'd become extremely wealthy.


WARNING: PERSONAL OPINIONS AHEAD!!!

If you're a company with a $27b valuation releasing an Electron app, then you're just a shite company. How do you have any self respect at that point. I get being a small team without subject experts in native apps that just needs to get something going. If you are a small company with a $27b valuation, we might continue the "is Electron still the right choice" conversation. I'd also be very curious what your app is doing that a small team can operate it and still be valued so highly. Otherwise, you're just a shite company making shite decisions and have no self respect.


If it makes you feel better to unilaterally condemn decision makers at big companies for not hewing to your priorities, then be my guest; it's a fun pastime.

However, if you're interested in why this happens, you need to acknowledge there are legitimate tradeoffs between going full native and Electron at any scale. Specifically, feature/UI parity and speed of development are higher with Electron. Deciding to retool your whole engineering org to optimize for highly polished native apps is actually a pretty risky decision once you get into this position at scale, regardless of how much engineering cred it generates on HN and other highly technical/UX-focused circles.


I'm in complete agreement here. These are businesses, and they are not run by development teams. Sure, the developers could leave because they don't believe in the ethics of the business, but then they are just going to be replaced by more Electron developers.

For most people, the software is "good enough." That doesn't mean that it's engineered in the best way possible. That just means that the business concentrates on other concerns (some simply about gaining more market share and profit).

I'm always in pursuit of writing the best quality code that I can, with performance in mind. I also work for a small software company where I have that luxury, and also the drive to practice outside of my salaried hours (because I enjoy it, not because I feel I need to). I would feel terrible if my clients were complaining about slow performance, and I would find a solution that works within my constraints of having other client work to perform. Not having to support thousands or millions of users with my software makes it a lot easier to work on a rewrite than needing to worry about every operating system and configuration out there, along with new bug reports that need to be addressed.


This reads to me like "at a certain scale, it's too risky to develop an application that is actually good."


Yes of course, if your reading comprehension is low and you prefer to apply your own personal black and white definition about what constitutes “good”, then it makes perfect sense for you to put these words in my mouth and ignore what I actually said.


Well, since I'm such a simpleton, perhaps you can ELI5.

But rereading what you wrote, I do think that my paraphrase -- while certainly harsh -- is not inaccurate. You were pointing out, quite correctly, that choosing to use something like Electron is choosing a particular tradeoff. That tradeoff is to reduce development costs at the expense of product quality.

I'm not saying that's always an inappropriate tradeoff, but we should at least be clear about what the tradeoff is actually about.


The keyword you missed is "retool". Completely rewriting your clients at scale is always a big risk; it's especially risky when you're rewriting from one to many, and you need a much broader set of skills to boot. This specifically refers to overhauling the technical approach (see: Second System Effect); it doesn't preclude a large company "develop[ing] an application that is actually good". In fact, every company I've founded or worked in has full native clients.

On a more nuanced point, tradeoff is not strictly cost vs quality, it's also about velocity and consistency. The latter especially can have a huge impact on the perceived quality for common users, so the tradeoff is not as black and white as you want to make it.


Let's reverse the comparison....

How many companies have a $27 billion+ valuation and are not 'shite' software companies?

I'm just wondering about this, because almost all the big companies I know write shite software... in fact, it seems it has nothing to do with how 'shite' your application is, but whether it has the features the users want.

Is there a reverse correlation? That if you're an investor you should shy away from companies writing 'non-shite' software because they aren't focusing on deliveries?


> How do you have any self respect at that point.

Because they have CFOs and other bean counters at that point, and there's just no good way to convince those people to approve spinning up new teams (or reskilling the existing one) to rewrite the application "properly" across several platforms.

It's like comparing "$$ for new feature development" (will move us forward) with "$$$$ for rewriting the application to be more efficient" (with uncertain benefits).


Electron is an established technology, offers acceptable performance for most users and doesn't - for the most part - require specific technical skills. You basically webdev with it and have a native looking app.

Those could be, but don't have to be, valid arguments for using it while still retaining self respect, i.e. making an informed choice about what is best for the project one is working on.


> offers acceptable performance for most users

Most users accept it because they have no better choice. If an app is terribly slow but a lot of online communities and friends use it, what are they gonna do, just not interact with others? You want to play recent Nintendo games but don't want to regularly replace the controllers due to an unfixed issue that affects roughly 50 million people and has a perfectly working solution for over 20 years? Too bad. But I guess Nintendo calls it acceptable for most users because it's not quite bad enough that people stop using a product they paid for.


Vote with your wallet or conscience, and don't buy or use it. This null choice is always conveniently missing when people say there is no "better choice." The fact that people continue to use it means there is at least some value in the product as it is. Therefore, it is acceptable to most people; otherwise they wouldn't continue to use it.


'Vote with your wallet' is like 'recycle if you want to save the planet'. It is a platitude. This is why we have regulations.


Yes, try regulating companies to not use Electron apps and see where that takes you. Regulation is generally reserved for egregious examples of corporate abuse, not using Electron or not. Now if there was a regulation for being able to choose client apps and have interoperability, like the EU is doing, then that's something I can get behind.


My comment was not about electron, it was about your platitude.


Yes, however in this case, my comment is about Electron, which is what we are discussing in this thread.


Then talk about electron and don't use thought-terminating cliches.


Sorry, the thread itself is about a particular topic and it is reasonable to assume that we are talking about that topic when making broader statements. Writing a drive-by comment about how it's a "thought-terminating cliché," which might be true when talking about a universal phrase but not for a particular case, is not very useful.


I guess you don't understand: saying 'if you don't like it, don't buy it' is a thought-terminating cliche. Just because people use something doesn't mean it is good, and not using it as an individual will not cause any change.


That's a fantastic way to deprive yourself of the things you like for no gain whatsoever. Voting with our wallets got us to where we are today.

People are already voting with their wallets and they're overwhelmingly voting for more exploitation, partially "helped" by the manipulation systematically employed in the industry that's already worked wonders for classic gambling. And Valve investing in Linux gaming had nothing to do with the size of the target audience - luckily for us.


>You basically webdev with it and have a native looking app.

That's a description of the problem.


It’s also a description of why native wasn’t chosen from the get go.

This thread is people who ship products vs people who ship engineering.


This thread is people who ship BS products, that the rest of the population is then stuck with using, and which hold the practice decades back (not to mention being bad for the environment).


Is app development only a 2 party system: native vs Electron? You seem to imply that it is. I disagree.


> How do you have any self respect at that point.

Usually looking at their last paycheck.

I don't personally agree with it, but, to some degree, this comment seems tone-deaf.

People keep paying them to be so-so.


While I agree with you in that we should have more native applications that are more conscious of how much resources they're using, making that transition from Electron to Native is probably not easy and riddled with pitfalls. Unless there's a standard path, how do you sell that to your superiors in the company in terms of opportunity cost?


Would they have gotten there if they started off native? Most definitely it would have taken longer to get to market. If they were to rewrite as a native app, then they would need to at least keep the same features their current user base are used to. Then you're going to start all over with new bugs being found.


MS just rewrote Teams to move off of Electron onto their own version of it (I forget the name; it comes with Windows nowadays).


They could just allow the iOS app to be used on macOS (Apple Silicon), I‘d happily use that. But for some reason, they don’t.


> But for some reason, they don't.

It's not that complicated. That's a new-ish capability and was not an option when the apps being discussed chose Electron (for their desktop incarnations). It's also not applicable to all macOS systems (as you already noted) out there given that it requires M1-M3 Apple devices. They'd still need to support the older macOS systems while also working out the issues on newer systems. Or they could maintain one solution that works on both.


That's assuming their iOS app isn't just a shell for a web view?


When Slack started off, were they a $27b-valuation company? No, they were smaller and, yes, needed something to get to an MVP. You're saying that after reaching the point of a $27b valuation, they cannot afford to hire subject-matter experts to deliver a much better tool than the bloated PoC released as a product? Seriously?


One, a valuation is just that: a value of the company, not the company's holdings. Also, that doesn't address that a rewrite (often) comes with stalled features, as well as newly introduced bugs. How much business value is there in that rewrite? We're talking about a business. These are tools that are usually forced upon people by management, and if it works but is a little slow or takes large amounts of memory, management will either do nothing because they got a discount, or upgrade machines. At the end of the day, for most it's "good enough".

To be clear, I'm not saying that a developer culture shouldn't strive for this, but these are large businesses and are out for that dollar, and neither seem to be run by developers that would call for a rewrite.


>and neither seem to be run by developers that would call for a rewrite.

No, but they seem to be run by UI people. They can justify doing a rewrite of the UI, but not the core? Not one person that I interact with using Slack appreciated the UI changes.


In my experience, which may not be common, UI people tend to be more social than backend developers. In reality, and this is not a judgment as to what is right and wrong, who is more readily able to push for change? Someone who is deep into the software, or someone who is closer to management and able to communicate more effectively (as far as socializing and building a rapport)? Again, I'm not saying it is right, I'm just going by the reality of how software works. This is a good reason for independent software developers to exist!

I agree that there is huge business value in writing correct software that is performant and gets the job done properly, and I agree with you that Slack's UI could be improved drastically. The problem is that it's driven by money rather than productivity. I'm pretty sure you and I both agree on how software SHOULD be, but that is not how software actually is, and I don't think it will be until it gets out of the grip of large software developers, or until we start to write for performance in an attempt to make software that is environmentally efficient.

If we ever get to the point where developers need to start worrying about performance for environmental purposes, I'm not sure we'll survive even if the performance impacts have been measured properly. I'm not trying to be depressing, but I just don't think it would happen until it was too late. Maybe there would be a minority of society left to do this, but that would be rebuilding society: a doomsday scenario. I don't think "we" are smart enough to deal with any catastrophic situation until it's too late.


Sure, they could, but if they're making money on a substandard implementation, then why would they bother making a good one?

This sort of thing is another example of our collective race to the bottom.


Discord's never had major performance problems for me, at least compared to anything else like Skype.


> nobody cares.

I don't agree.

Most people don't actually know any better, to be able to care. That includes not just normal users, but programmers, web admins, team leaders and managers too.

They never had to unload a TSR because there wasn't enough conventional DOS memory available. They never used Win 3.10, where even the button light/shadow colors could be changed, yet it fit into 1 MB of RAM and 20 MB of disk. They never browsed via 3KB/s dialup. Many never saw a Web 1.0 site without JS. And 99.999% of them never saw a 64K intro/demo. They don't know it's possible.

> If people did care, you'd have someone writing a highly optimized chat/voip app from the ground up

That does happen, rarely, and even then, it doesn't change the situation.

1) Very very few people are capable of writing apps. Out of them, very very few are capable of writing apps, optimized or not, that don't import the entire internet as a dependency. That's because that's how they were educated: "don't waste time rewriting code", "you'll repeat the mistakes", and other BS arguments that lead to nodejs and other monstrosities.

2) Those very few people are the same ones that actually wrote Discord, Slack, Teams. Most of them want to finish quickly and jump ASAP to the next big fancy project that caught their interest. So they import the entire internet as a dependency instead of writing again a 2-line function. These are the people that didn't care.

You won't see people like Christian Ghisler anymore, who write and maintain a piece of software for their entire life.

3) Someone recently wrote a Windows terminal app faster, smaller, and better than the official one, just as a tech demo. Most people (99%) won't find out about it. I'm sure there are Teams alternatives; nobody finds out about them because Google and M$ control what everyone sees.


I care a lot, but I'm not going to write a good version. Instead, I just don't use those apps (except at work where they force us to use Teams) -- that's a whole lot easier.


I care enough to violate TOS by bridging (some of) them to my Matrix server and chatting through a Matrix client instead. But absolutely not enough to (A) write one, and (B) attempt to acquire market share.


For companies selling hardware with software, it does matter. Look at Apple. An iPhone has way less memory than an Android phone, and the profit margins are higher for Apple. Apps run faster on lesser hardware. I’d argue that this is a reason why people want iPhones. Good enough is often not good enough.


That's a US-centric viewpoint; outside the US (e.g. in Europe), Apple's iPhone market share is much, much lower. It's also a little bit like comparing gaming consoles to PCs. Yes, a stable platform with fixed hardware specs is much easier to optimize for than a hodgepodge of devices from various vendors (be they PCs or Android phones).


It’s extremely wasteful, people buying new phones because their calendar or slack app are sluggish on their 5 year old phone is insane to me. It’s so bad for the environment.


100%.

“Do it right” should always be the first goal. Then you make concessions to make sure it works in the flawed reality it needs to operate in.

A classic example from back in the day (maybe it’s still this way): don’t just design your webpages for IE. Write the markup and stylesheets as they ought to be, then amend as necessary to make it look correct for each browser’s quirks.


To interpret your example another way, a page working in IE is doing it right. So first you do it, and structure it the way you think it should be done with "correct" markup. Once you have that, you can then do it right and get it working properly in IE. After that, doing it better would be restructuring things so maybe you don't need as many hacks.


Reality: Your employer doesn't pay you to write 'right' software, they want a deliverable that works in IE by end of day tomorrow and for the life of you, you can't figure out where half the elements are actually displaying.


Reality: it is far easier to iterate on software that’s clean and mostly correct than it is to do on software that is riddled with hacks, gotchas, footguns, and long-distance side effects.

It’s extremely depressing working with “senior” engineers who’ve spent an entire career with the above mentality, who have missed out on any chance at ever learning how to actually engineer software for reliability and maintainability. Their inability to do so reflects on a lack of practice rather than some sort of fundamental impossibility. Which sadly seems to be a widespread misconception these days.


Bezos talks about this a lot. Decisions can either be significantly important to get right and worth spending the time to explore fully all the options --- or not so much, but only if the cost of getting it wrong is cheap. The key is realizing this dichotomy exists, and then figuring out how to identify the problem in front of you as one or the other.


The Last Responsible Moment matters tremendously for the things that you 'have to get right' as well, and people who "need closure" can complect such negotiations.

If the problem is hard, you have to start thinking about it early. But every day you've thought about it a little without committing, you have a little more information about how it'll feel to take each path, and how it's likely to affect your operations people and customers.

Also Observation Bias may cause you to read that one article that explains why one of the choices seems like a good idea but is a trap. Or why you definitely should or should not use the latest major version, because of some change for the better or worse.

Frustratingly, some people see any discussions about such things as non sequiturs, distracting them from today's problems.


The 1-way and 2-way doors concept in the Amazon lexicon is useful.

Worth noting:

Most decisions are 2-way doors. Very few decisions are 1-way doors. It takes experienced folks to identify true 1-way doors and apply the brakes. Having a culture of thoughtful document reviews, for example, gives space for people to identify 1-way / 2-way doors and push back. Having institutionalized knowledge gained from 20 years of running the largest public cloud provider helps in identifying 1-way doors in software development too.

All of that is to caution... yes, Bezos says useful things, can you take it and apply it to your company? Maybe. Do you implement / have the rest of what made it work well for Amazon too?


>Bezos talks about this a lot. Decisions can either be significantly important to get right and worth spending the time to explore fully all the options --- or not so much, but only if the cost of getting it wrong is cheap.

So he basically says "It will either rain, or it will not rain".


No, I said different words than that. Maybe my edit clarified.


I know, I was mostly interested in capturing the "amounts to" of what he said, as opposed to the precise words.

That some decisions are more important to get right than others (that "the dichotomy exists") is just a trivial observation, the kind that people like Bezos make to appear to be insightful "technical leaders", but which is ultimately empty, amounting to "some decisions are important, others are not". Gee, thanks, Jeff!

The interesting part is knowing which case is which, or advice for that - which the above leaves out.


I read it as "some decisions end up being more important than others, with no way to know beforehand, keep trying things to increase your luck surface".


> On the other hand, in software-related matters, I have the increasing feeling that kludges and temporary decisions made in the "First do it" stage, tend to get carried on to infinity through the other phases

This is true, unless you also follow the "throw away your first draft" process. If you do that, then you also throw away all of the duct tape and bubble gum you put in while figuring out what you're actually doing.


Always throwing away the first draft is such a useful strategy. Besides almost always resulting in a better finished product, it keeps you humble and light - life is an experiment!


In companies, software exists to serve business and customer needs, not the other way around. SWEs are typically shielded from the business side so many don't understand this and treat it like an art form, which it can be, but that is not its primary purpose. If Electron didn't work, people wouldn't use it. If it takes 2x as long to make native apps while your competitors use Electron, they will supplant you.


Yeah but if something stays in Electron 10 years later then either it’s not successful enough to warrant the cost of a rewrite or the payoff of the rewrite isn’t a good trade off.

In both cases if originally building in Electron was a substantial productivity boost then it sounds like it was the right choice.


I hate to break it to you, but some things are good enough.


All three legs of the "first do it, then do it right, then do it better" stool are necessary parts of this philosophy / methodology. It's indeed a difficult and rarely-achieved practice. But personally, I think it's the better aspiration, even recognizing that it often gets stuck at the first step. I think the alternative of trying to start with "build it right" also usually falls into its own failure modes - analysis paralysis, crumbling under the weight of adding all the complexity all at once rather than iteratively, etc. - which in my view are even worse.


Another approach is to "do it right, but small", i.e. do an analysis of the smallest piece needed for it to work, the MVP, and then build it properly with good foundations and measurements in place.

I worked this way with a super strong business PM. Each iteration was small, but we measured everything and made sure it had impact. Every feature was built properly, i.e. tested, refactored, typed, etc. It made our pace more steady. If something didn’t have an impact, we tried to understand why; incidents were reported and properly remedied. Still the most solid way I’ve worked.


My read is that most people try to skip step 2. So we either stay with the kludge, or we double down on it, making it even more expensive to remove than if we had just left it.

The real revelation for me though is the reversible decision, to the point that for really reversible decisions, I can be very averse to us spending any significant meeting time on it at all. There are 6 of us here, let's not waste man-hours on something we could as easily decide with a random number generator. Move on to something more delicate.


It's not just software. It's everything. It's even things like parenting -- per TFA you'd have to have three children to maybe get it right, but things are not that easy.


"we'll build it in Electron" isn't a kludge or temporary decision though. It is an acknowledgement that there aren't enough devs on payroll to support a true-native app. The tradeoff is RAM for developer time and hiring efficiency. That is a pretty good trade; RAM is there to be used.

The technical issue here is that the protocol is closed, not that the client is fat. So someone in a RAM-constrained environment can't choose to make different trade offs. But the protocol being closed, while annoying as a user, is definitely a strategic choice.


People who constantly bellyache about Electron need to compare the all-up cost of one web developer -- of which there is a massive supply in the labor market -- versus one experienced native dev per OS; those devs are rare and expensive and getting more so all the time as the desktop withers as a development platform. That cost difference absolutely can be the difference between economic viability and unviability, but HN constantly seems to think companies only pick Electron because they're cackling with glee at the thought of wasting their customers' CPU cycles or whatever.


On top of that, you'd have different teams supporting each OS, because a company developing a project this large isn't going to have a small team where every developer knows every OS they need to build for (the desktop analogue of full-stack in web development). You then need communication between those development teams, as well as feature parity across OSes, because your clients need to communicate with each other across OSes: your product is a chat application for business, and it's not normally acceptable for features to differ between OSes when the product is meant for team chat.


Nothing is more permanent than a temporary solution.


If you don't have the time and agency to do-it-right, you won't.

It's tempting to believe that just by dropping the "discover what you need to do" step you'll get enough time for it, and that you can make the whole thing a single change you can plug into your Jira. It's also delusional. A blatant lie people have been repeating to themselves for half a century, while knowing full well the entire time that it's wrong.


I've been doing things like this my whole life. It's very easy for anyone to criticize things. You know what's not that easy? Building something the people can criticize. Once you have something, even a proof of concept working, you can improve it; given more time and or resources.


There is a difference between criticism and discouragement. The first is incredibly valuable.

The problem is that it's very difficult to articulate that difference. This is a failure of language (and social norms), and it has to be recognized by both the speaker and the listener before it can be accommodated.


Only if you work at a company that values their developers, and sees the value in improving the software. Some companies will say "good enough" and concentrate on other things, like new features, rather than performance or security issues. I'm not saying that's right, just how businesses see development.


I agree. I call it “something to throw stones at”.


Failure is the first step of success. There was a thread a few days ago about it.


I didn't mean failure per se. My case for instance: I've spent the last year building a house for myself. It was a lot of work. Sure, it's easy for some people (my older brother) to come over and say things like "this part here could've been better". Well, of course it could have. But if I were to overthink every single piece of it, I'd still be living in my old apartment.


1. First make it possible

2. Then make it beautiful

3. Then make it fast

4. Rinse and repeat

Suffering-oriented programming (2012) — http://nathanmarz.com/blog/suffering-oriented-programming.ht...


Variant:

Listen to the customer so you CAN:

-- Make it fit for purpose.

-- Make it fit for use.

If the above fails,

-- Make it fit for marketing.

-- Make three envelopes.[1]

[1] https://news.ycombinator.com/item?id=38725206


Wow, haven’t seen this name mentioned in years. Apache Storm was a great tool at the time. Good memories.


> First do it

Sounds like your codebase will be full of shitty, half-baked, half-unneeded "features" with a lot of bugs, legacy, and misdirection.

> then do it right

Sounds like your codebase will have a lot of hyped, now-dead, trends-of-the-moment frameworks while your team keeps arguing what's "right".

> then do it better

Sounds like your codebase will have a lot of rewrite in the new hyped, not-yet-dead, trends-of-the-moment frameworks while actual business logic will be ignored.

In my experience, this is the worst advice if we are talking about software engineering. I would advise the opposite: "Is this needed? really needed? nerd out on why it's needed."


Seriously, throwing shade at the 'launch fast, iterate fast' mantra is like saying the Wright brothers should have aimed for a Boeing 747 on their first go. The beauty of the MVP approach isn't in launching with a pile of 'shitty half-baked features.' It's about getting it in the hands of real users as quickly as possible so that you can start to iterate.


Considering the failure rate of startups (assuming we’re applying the advice in that context), you shouldn’t be worried about rewriting your codebase or using a hyped framework that will be outdated in 5 years because chances are (statistically) your startup won’t exist in 5 years.

The opposite advice has merit, but you can’t take it to the extreme. Better to build an MVP in 1 month than to spend 6 months doing user research and then another 6 months building an MVP with perfectly performant and optimized code.


>> First do it

POC code doesn't belong in a permanent codebase.

>> then do it right

> Sounds like your codebase will have a lot of hyped, now-dead, trends-of-the-moment frameworks while your team keeps arguing what's "right".

Doing it right should include a framework. I'd rather use a framework that is dead in a month than write tens of thousands of lines of boilerplate to make my own. ASP.NET and Spring come to mind as modern, reliable frameworks. Even Angular is reliable, even if not popular.

>> then do it better

> Sounds like your codebase will have a lot of rewrite in the new hyped, not-yet-dead, trends-of-the-moment frameworks while actual business logic will be ignored.

Code should be considered disposable when it has outlived its usefulness. Even if that requires a rewrite. I've maintained classic asp websites and I have rewritten them in modern frameworks. Projects are not immutable. They are only as useful as they are until they are not. Then they die or get rewritten.


>I would advise the opposite: "Is this needed? really needed? nerd out on why it's needed."

In almost every company I've seen "Is this needed" is always followed up by some c-level saying "Of course it fucking is, we've sold the product for $X million to $Y company and now we need that feature working"

HN has a fair number of commenters that live in a dream world where they get to implement the features they choose, but for the vast majority of programmers it's going to be the features sales chose.


I love this saying, though I always heard it as a more SE-specific adage: make it work, make it right, make it fast.


IMO Make it work, make it right, make it fast is a bit off.

Often the "fast" part is very intertwined with the "right/work" part. You can make it work/right and then realize you designed the core data structures completely wrong for performance. I know; I've made that mistake before, where I avoided performance until a year after a project was in flight, only to realize the fundamental API design actually made it impossible to fix. I had to basically start from scratch.

Of course it depends on if performance matters much, but it often does.
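A toy sketch (mine, not the commenter's actual project) of how an early data-structure choice can quietly dominate performance: the same dedupe logic backed by a list versus a set. Both function names are hypothetical illustrations.

```python
# Hypothetical illustration: identical dedupe logic, two backing
# data structures. The list version does a linear membership scan
# per element (O(n^2) overall); the set version is O(1) average per
# check (O(n) overall). Same outputs, very different scaling.

def dedupe_slow(items):
    seen = []  # list: "x not in seen" scans the whole list each time
    out = []
    for x in items:
        if x not in seen:
            seen.append(x)
            out.append(x)
    return out

def dedupe_fast(items):
    seen = set()  # set: hashed membership check, O(1) on average
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Both return [1, 2, 3] for [1, 2, 1, 3, 2]; only the cost differs.
```

If an API exposes the list directly to callers, switching to the set later can break them, which is exactly how "fast" bleeds into "right".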


> I know, I've made that mistake terribly before where I avoided performance until a year after a project was in flight [emphasis added]

It's not about avoiding making it fast, it's about prioritizing. There's a probably apocryphal story from, I believe, The Psychology of Computer Programming (Weinberg) where a mainframe programmer opposed a new program because it wouldn't be as fast as theirs. Except their program produced garbage results, and the new one produced correct results.

The priority was in the wrong place (speed) rather than the business need (correctness).

There's absolutely nothing wrong with making your program fast, so long as it's not to the detriment of correctness or while avoiding making it correct. Fighting for performance boosts in your data access patterns or data layouts while still manipulating the data incorrectly is a fruitless endeavor.


Joe Armstrong, one of the inventors of Erlang, seems to agree on your last point. He said: "Make it work, then make it beautiful, then if you really, really have to, make it fast." I think "make it right" is often a moving target in software development (depending on how you're looking at it).


I see Make It Fast as a discrete step after Make It Right, because if it's made Right, then you can swap out the parts where optimizations need to be made. While you can't always know where slowness will occur, you do know that it will happen one day, and design the system so that swapping the "slow part" for a "fast part" is relatively trivial. That's the Make It Right part.

A well-designed API should behave so that you can change whatever is happening behind the scenes, so long as the outputs remain the same for the same inputs. Design for consistency and idempotency. Implementation (read: behind-the-scenes) details are just that, details, and subject to change. If your implementation is tied to your interface, there are bigger problems and you skipped the Make It Right part.
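To illustrate the point with a minimal sketch (all names here are made up, not anyone's actual code): if callers depend only on a stable interface, a slow implementation can later be swapped for a fast one without touching any call site.

```python
# Hypothetical sketch of "Make It Right" enabling a later
# "Make It Fast": callers talk to Store, never to a concrete
# implementation, so SlowStore can be replaced by FastStore
# without changing any caller.
from abc import ABC, abstractmethod

class Store(ABC):
    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def put(self, key, value): ...

class SlowStore(Store):
    """First pass: linear scan over a list of (key, value) pairs."""
    def __init__(self):
        self._pairs = []

    def get(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        return None

    def put(self, key, value):
        # Drop any old entry for this key, then append the new one.
        self._pairs = [(k, v) for k, v in self._pairs if k != key]
        self._pairs.append((key, value))

class FastStore(Store):
    """Drop-in replacement: dict-backed, same observable behavior."""
    def __init__(self):
        self._map = {}

    def get(self, key):
        return self._map.get(key)

    def put(self, key, value):
        self._map[key] = value
```

Because both classes honor the same contract (same outputs for the same inputs), the swap is invisible to callers, and that is what keeps Make It Fast a discrete, low-risk step.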


Most of "make it fast" in modern software is fundamentally architectural in nature. If your architecture is not designed for performance or efficiency then no amount of module swapping will make it fast in any kind of absolute sense. And swapping architectures is tantamount to a rewrite.

Most performance has to be intentionally designed in from the beginning if it matters.


I will say, I don't think I've ever faced a performance issue that was caused by poor architecture. Maybe I've been exceedingly lucky. But pretty much all of the performance issues I've encountered that I can remember are in individual queries or functions / methods (or sometimes, a group of functions / methods); discrete units of code that could be tested, changed, and fixed without any sort of re-architecture or major rewrite. Or maybe we're using different definitions of "architecture" here.


My exact case was something like this: I've made a bunch of style libraries for web (my latest is Tamagui). The one before Tamagui was similar, it had variants and a `styled` helper, but it didn't output to "atomic CSS". A full year plus into its development it was working alright but was quite slow due to all the crazy CSS it was generating and inserting all the time. Atomic CSS really helps this in many ways.

So I dove in, technically I felt I could keep the API surface the same. But after about a full month of refactoring it to work with atomic CSS I found many problems. There are just some fundamental limitations to the API design you must enforce to make it work, and without such you really can't merge things properly. It's hard to explain without writing a mini-book, but needless to say the API surface very much can dictate the performance, and if you stuff your API with a bunch of features before making things fast, you may end up like me having to basically start from scratch.

My take: work/right/fast is a loop you must run many times. They also bleed into each other. Sometimes you do work/right and it feels fast, but you haven't deployed it at scale, so you never realize it's not fast. Keep your API as simple as you can, try and hit the fast part somewhat early before you add many features, and don't be afraid to take all your lessons and restart things. If fast is important to your lib, making it right must also be done in tandem with make it fast.


Yeah, "make it work, make it right, make it fast" is only applicable if you get to throw out most of the work from the first two steps, which is rarely the case.


0. Don't tell your manager/customer that you first did it or there will be no do it right and do it better!


I used to have a poster on my wall at my office at Borland that said:

DO IT. DO IT RIGHT. DO IT RIGHT NOW.

I interpreted it as "start by writing a test or some code. make sure the code gives you correct results. only then, when the code generates correct results do you jump in and optimize," which is pretty close to the OP's message.


I've been fixing "this is just a prototype" code for most of my professional life. Don't do that. Write good code; don't write shitty code and promise you'll fix it -- you won't.

What this ends up looking like is really:

- write shitty code with the excuse that you could do better, and you'll fix it later

- fix it up later by tacking on dirty fixes and removing good code others added to fix a specific problem (ignoring Chesterton's fence)

- then get tired and rewrite it entirely, throwing away all progress, except for the parts where you just copy-paste your old shit because everyone forgot how it works


See, I share your experience of spending most of my time improving things that aren't great, but I see it as a good thing. A full two thirds of "do it, do it right, do it better" is about making improvements to something that is already working and useful.

Sure, there is some inefficiency in re-doing already-working things to improve them, but in my view, less so than in spending more time creating a more perfect thing, but which is often the wrong solution or a solution to the wrong problem altogether.

If it were possible to perfectly know that you are creating the definite right solution to the definite right problem, then sure, go nuts! But in the real world this is essentially never the case, so it is better to first "do it", then if that "it" turns out to be useful, to "do it right", and then since that still inevitably won't actually be "right", to "do it better", and then to "do it better" again and again, until "it" is no longer useful.


I have always heard this when talking about approach to solving problems...

Once to understand the problem, once to understand the solution, once to do it properly.


My version of this:

1. Make it possible (even if it's ugly and expensive to do) - Blackberry

2. Make it pleasant/probable (aka improve the UX to customer maximum) - iPhone

3. Make it profitable/cheap (optimize efficiency/cost to deliver) - Android


We all agree step 1 lost the battle.

This implies that the business which did step 3 is more successful than the one that did step 2. It would be interesting to know what Apple and Google think of their businesses relative to each other.


Blackberry might not be around anymore but 24 years of operation and 85 million users (at peak) is nothing to scoff at. I would be glad to have founded such a „loser“


By market share, Android is over 80%. But Apple still makes 100% of the profit. I think I'm referring more to the diffusion of technology than the success of the company in the space, there's other examples.


I think it's hard to jump straight to "do it right" as step 1, because it's often not actually clear what "right" means until you actually have running code. Editing is easier than writing. Few essays, books, or software projects can skip having at least a first draft.

There are two problems that tend to keep much software in what is essentially the "draft" stage, though. One, articulated by many on this thread, is that business incentives really push to "if it's working well enough, it's working well enough." And as some have noted, that's not necessarily a problem! We're, most of us, writing software for pay here, not for fun or personal expression. But! As engineers, we have more insight into if it's really working "well enough" (assuming that we, as engineers, have also taken the time to make sure we understand the product we're building). Things like performance, security, and maintainability are part of making sure it's "good enough," but our non-technical partners and stakeholders can't necessarily see this, and part of our role is to help them see this so that they are making truly informed decisions on if it really is "good enough."

Second, and I feel bad saying this, there's often a dearth of widespread technical ability to actually take code from "ok" to "good." I don't mean this in an elitist way of saying some programmers are just better than others -- I do truly believe that with proper support and mentoring almost anyone who wants to become a better programmer can (see my user name). But I think that as an industry, we actually do a poor job helping our colleagues advance and deepen their skill. The ratio of skilled programmers to new is too small, and of skilled programmers with ability and inclination to level up their junior colleagues even smaller.

That's a decision by business leaders, who by and large have decided that junior-heavy, cheaper dev teams are "good enough."



Fred Brooks. The Mythical Man Month. 1975.

Chapter 11. Plan to Throw One Away.


"This I now perceived to be wrong, not because it is too radical, but because it is too simplistic. The biggest mistake in the 'Build one to throw away' concept is that it implicitly assumes the classical sequential or waterfall model of software construction." -- The Mythical Man-Month, 20th Anniversary Edition, pg. 265


For a book that was written in the 70s, MMM still has a lot of wisdom.



I just wrote a similar article called "Iterate relentlessly" which covers much of the same ground.

It's here if you're interested: https://www.ramijames.com/thoughts/iterate-relentlessly


Lots of variations on this. The one I heard first in my career:

1. Make it work

2. Make it work well

3. Make it look good


The criteria for an effective release tends to be more prickly when deployed to a million users and, to certain companies, a million users is a tiny audience. Know your audience.


I've seen it as this:

1) The first time, get it done the most expeditious way

2) The second time, do it the way you wished you'd done it the first time

3) The third time, automate it...


I love that we all have our own version of this saying! Here is mine (my dad practically raised me with this haha)

Make it run, make it right, make it fast (if you have to).


A similar phrasing I've internalized is "Make it boring (stable, predictable) or make it better"


My teacher always said it like this. "Make it work. Make it right. Make it fast".


This has been one of my mantras since I started as a software developer.


This is my greatest weakness, and it has a name: executive dysfunction. Well, that's the descriptive one. The technically correct - and therefore useful - name is ADHD.

Because I have lived so much of my life generally unable to "Just Start Somewhere", I've given the concept a lot of thought, and a lot of criticism. I can see my bias clearly today, but I'm still not convinced that my criticisms were ever wrong.

--

What's the value in doing something? Is it the doing, or the something?

If the value is in the doing, then there is no value in progress. That's obviously untrue.

If the value is in the something, then where is the doing to make that something in the first place? Clearly, some doing is necessary, too.

I find that the true value of "doing something" lies in the connection between the two. Their interdependence creates structure. By doing, we build something into something more. As a result of the expansion of something, we are better equipped for more doing.

--

But what if something could do itself? To me, this is the ultimate dream of software: the limitless potential of a something that does. The more time I have spent obsessing over this system, the more I have seen the world of software go in the other direction.

The problem lies in our goals. What is it that we desire to do? What is the something that will get us there? These are obvious questions with an obvious answer: application. An application is the kind of something that takes us directly to our desires. There's one problem with that system: every application must be constructed with the same process of doing. There is just enough of a platform for this to work, so everyone who is comfortable with "just doing the thing" can build an application, and be on their merry way.

The platform itself was made with this very category of goal in mind. Our foundations are constrained to a single design pattern: applications. This was not always the case.

--

Once upon a time, we had scripts. Technically, they are still here, but this platform is showing its age. Much like a human's ears, there are some features of the script platform that will never stop growing. New languages. New toolchains. New protocols. The more somethings we build onto script, the more unbalanced the overall platform gets. This is the problem that the application platform avoids.

But how? There are a few valuable weaknesses that application provides:

- Applications are weakly interconnected. If you don't need one, you can simply leave it behind. There is no need to cut it out, because it was never built in.

- Applications are redundant. An application has all of its doing built-in. This allows its UI/UX to be coherent, because it is free to define the doing in its own terms.

- Applications are independent. If an application depends on another application, that dependency is explicit, and can be easily managed. Dependencies can be swapped out with minimal work.

As valuable as each of these are, a weakness is a weakness. They haven't stopped the world of software from reaching incredible goals; but they have been enough to stop me.

--

Is this the end? Is application the final platform? I'm not convinced. I think we can do better. I think we have everything we need to create a new platform. This new platform can contain the strengths of both script and application. I have a mostly-coherent idea in mind: I call it the story empathizer.

Now I just need to start building it...


Url changed from https://nitter.cz/addyosmani/status/1739052802314539371, which points to this.

It's fine to post workaround URLs in the comments, but please don't make them top-level links.



