Hacker News | PenguinCoder's comments

Pretty sure dd is disk destroyer

I'm sure you have, but try bringing that up to Epic instead of introducing AI slop and data gathering into HIPAA workflows.

Premature optimization. Not every single service needs or requires five nines.


What does that mean, though?

If I'm storing data on a NAS and keeping backups on tape, a simple hardware failure (one that would cause zero downtime on S3) might take what, hours to recover from? Days?

If my database server dies and I need to boot a new one, how long will that take? If I'm on RDS, maybe five minutes. If it's bare metal and I need to install software and load my data into it, perhaps an hour or more.

Being able to recover from failure isn't a premature optimization. "The site is down and customers are angry" is an inevitability. If you can't handle failure modes in a timely manner, you aren't handling failure modes. That's not an optimization, that's table stakes.

It's not about five nines, it's about four nines or even three nines.


You're confusing backup with high availability.

Backups are point in time snapshots of data, often created daily and sometimes stored on tape.

Its primary use case is giving admins the ability to e.g. restore partial data via export and similar. It can theoretically also be used to restore after a full data loss, but that's beyond rare. Almost no company has had that issue.

This is generally not what's used in high availability contexts. Usually, companies have at least one replica DB which is in read only and only needs to be "activated" in case of crashes or other disasters.

With that setup you're already able to hit five nines, especially in the context of B2E companies, where scheduled downtime is usually deducted via the SLA.


> With that setup you're already able to hit 5 nines

This is "five nines every year except that one year we had two freak hardware failures at the same time and the site was hard down for eighteen hours".

"Almost no company has this problem" well I must be one incredibly unlucky guy, because I've seen incidents of this shape at almost every company I've worked at.


I know one company that strove for five sixes.


You have to look at all the factors; a simple server in a simple datacenter can be very, very stable. Back when we were all doing bare metal servers, uptimes measured in years weren't that rare.


This is true. Also some things are just fine, in fact sometimes better (better performing at the scale they actually need and easier to maintain, deploy, and monitor), as a single monolith instead of a pile of microservices. But when comparing bare metal to cloud it would be nice for people to acknowledge what their solution doesn't give, even if the acknowledgement comes with the caveat “but we don't care about that anyway because <blah>”.

And it isn't just about 9s of uptime, it's all the admin that goes with DR if something more terrible than a network outage does happen, and other infrastructure conveniences. For instance: I sometimes balk at the performance we get out of AzureSQL given what we pay for it, and on my own time you can safely bet I'll use something else on bare metal, but while DayJob is paying the hosting costs I love the platform handling the backup regime, that I can do copies or PiT restores for issue reproduction and such at the click of a button (plus a bit of a wait), that I can spin up a fresh DB & populate it without worrying overly about space issues, etc.

I'm a big fan of managing your own bare metal. I just find a lot of other fans of bare metal to be more than a bit disingenuous when extolling its virtues, including cost-effectiveness.


It's true, but I'm woken up more frequently if there are fewer 9s, which is unpleasant. It's worth the extra cost to me.


Hence you can use AWS to host them.


And each additional nine increases complexity geometrically.


All of those are examples of overbloated, slow, horrible user experience apps.


Does their market share back up your take of them as horrible apps?

Are there Qt or GTK competitors crushing them?

I always hear how terrible Electron apps are, but the companies picking Electron seem to get traction that Qt or other apps don't, and they seem to have a good cross-platform story as well.


Users will happily deal with a suboptimal experience as long as there are other things attracting them to the product. That's why Microsoft can do whatever it wants with Windows without worrying their users will run off somewhere else. So if you care more about people than businesses, maybe it shouldn't be an excuse to pick "better dev experience" over the user's.


Beware of that logic. You notice successful Electron apps because of how bloated they are. I suspect you use many Qt apps without even noticing.

One that comes to mind, which I use daily and only recently noticed was implemented in Qt, is the Telegram desktop app.


They said horrible user experience apps, not horrible apps. You can still deliver an app with a horrible user experience and build a profitable business. Ever done an expense report?

Companies aren't picking Electron due to inherent shortcomings in other platforms; they're picking it because it's easier (and cheaper) to find JavaScript devs who can get up to speed with it quickly.


Discord, VS Code, and Figma are all apps that individuals choose and are well liked despite many alternatives. Slack too I think, though I don’t have experience with it.

Your comment applies to Teams and I'm sure other Electron apps. But the sweeping generalization that Electron apps have terrible user experiences is pretty obviously incorrect.


They work great for me.


Oh yes, the good old "works for me". On yesterday's supercomputer, I presume? I live in a "developing" country (I have doubts it's really developing); most people are running laptops with no more than 8 GiB of RAM (sometimes it's 4 or less), and all this Electron nonsense runs like molasses, especially if you're trying to use a computer like a proper computer and do multitasking.

And most of the world is like that, very few of us (speaking globally) have $2k to drop on a new supercomputer every few years to run our chat applications.


Hey, I found the CEO of Discord.


Chicken liver has more iron and selenium per ounce than beef liver. It's easier to eat a ton of it and not as harsh tasting. Make some dirty rice or just liver stew!


I prefer to turn that into pâté, personally. The goal is always getting people to actually eat the stuff.


There are quite a few reasons that should happen, but I won't hold my breath. And that issuance really won't do anything worthwhile, except be a footnote in a history book.


I don't think there is any way in hell the US is going to be welcomed back on the world stage as a partner to any of their former allies again unless, among many other things, they put themselves under the ICC's jurisdiction.


Unfortunately that has been forgotten in this era.


I'm proudly a 100% on-prem Linux sysadmin. There are no openings for my skills, and they don't pay as well as whatever cloud hotness is "needed".


Nobody is hiring generalists nowadays.

At the same time, the incredible complexity of the software infrastructure is making specialists more and more useless. To the point that almost every successful specialist out there is just a disguised generalist who decided to focus their presentation on a single area.


Maybe everyone is retaining generalists. I keep being given retention bonuses every year, without asking for a single one so far.

As mentioned below, never labeled "full stack", never plan on it. "Generalist" is what my actual title became back in the mid 2000s. My career has been all over the place... the key is being stubborn when confronted with challenges and being able to scale up (mentally and sometimes physically) to meet the needs, when needed. And chill out when it's not.


> Nobody is hiring generalists nowadays.

What?

I throw up in my mouth every time I see "full stack" in a job listing.

We got rid of roles... DBAs, QA teams, sysadmins, then front end and back end. "Full stack" is the "webmaster" of the modern era. It might mean front end and back end; it might mean sysadmin and DBA as well.


Even full stack listings come with a list of technologies that the candidate must have deep knowledge of.

> We got rid of roles... DBA's, QA teams, Sysadmins, then front and back end.

To a first approximation, those roles were all wrong. If your people don't wear many of those hats at the same time, they won't be able to create software.

But yeah, we did get rid of roles. And still require people to be specialized to the point it's close to impossible to match the requirements of a random job.


That's the crazy thing.

Most AWS-only Ops engineers I know are making bank and in high demand, and Ops teams are always HUGE in terms of headcount outside of startups.

The "AWS is cheaper" thing is the biggest grift in our industry.


I think this is driven by the market itself and the way cloud vendors promote their products.

After being fully in the cloud for some time, we're moving to hybrid solutions. Upper management is happy with the costs and the cloud engineers have new toys.


1. large, homogeneous domain where the budget for your department is large

2. niche, bespoke domain primarily occupied by companies looking to cut costs


I wonder how vibe coding will impact this.

You can easily get your service up by asking Claude Code or whatever to just do it.

It produces AWS YAML that's better than what many DevOps people I've worked with have written. In other words, it absolutely should not be trusted with trivial tasks, but you could easily blow $100Ks per year for worse.


I've been contemplating this a lot lately, as I just did a code review on a system that was moving all the AWS infrastructure into CDK, and it was very clear the person doing it was using an LLM, which created a really complicated, over-engineered solution to everything. I basically rewrote the entire thing (still pairing with Claude), and it's now much simpler and easier to follow.

So I think for developers that have deep experience with systems LLMs are great -- I did a huge migration in a few weeks that probably would have taken many months or even half a year before. But I worry that people that don't really know what's going on will end up with a horrible mess of infra code.


To me it's clear that most Ops engineers are vibe coding their scripts/YAML today.

The time it takes to have a script ready has decreased dramatically in the last 3 years. The number of problems when deploying for the first time has also increased in the same period.

The difference between the ones who actually know what they're doing and the ones who don't is whether they will refactor and test.


Most businesses really don't need that complexity. They think they do. Premature optimization.


If your database has a hardware failure then you could lose all sales and customer data since your last backup, plus the cost of the downtime while you restore. I struggle to think of a business where that is acceptable.


Why are you ignoring the huge middle ground between "HA with fully automated failover" and "no replication at all"?

Basic async logical replication in MySQL/MariaDB is extremely easy to set up, literally just a few commands to type.
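
For a rough idea of what "a few commands" means, here's a sketch of the replica-side setup, assuming MySQL 8.0.23+ with GTIDs enabled on both servers and a replication user already created on the primary; the hostnames and credentials are made up, and on older versions the equivalent statements are CHANGE MASTER TO / START SLAVE. You can type the same SQL straight into the mysql client instead of scripting it:

    # Sketch only: point a fresh replica at the primary and start replicating.
    # Assumes MySQL 8.0.23+ with GTID mode on, a 'repl' user already granted
    # REPLICATION SLAVE on the primary, and mysql-connector-python installed.
    import mysql.connector

    replica = mysql.connector.connect(
        host="replica.example.internal", user="root", password="...")
    cur = replica.cursor()

    # Tell the replica where the primary is and start the replication threads.
    cur.execute("""
        CHANGE REPLICATION SOURCE TO
          SOURCE_HOST = 'primary.example.internal',
          SOURCE_USER = 'repl',
          SOURCE_PASSWORD = '...',
          SOURCE_AUTO_POSITION = 1
    """)
    cur.execute("START REPLICA")
    replica.close()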

Ditto for doing failover manually the rare times it is needed. Sure, you'll have a few minutes of downtime until a human can respond to the "db is down" alert and initiates failover, but that's tolerable for many small to medium sized businesses with relatively small databases.

That approach was extremely common ~10-15 years ago, and online businesses didn't have much worse availability than they do today.


I've done quite a few MySQL setups with replication. I would not call the setup "extremely easy", but then, I'm not a full-time DB admin. MySQL upgrades and general troubleshooting are so much more painful than with AWS Aurora, where everything just takes a few clicks. And things like blue/green deployment, where you replicate your entire setup to try out a DB upgrade, are really hard to do on-prem.


Without specifics it's hard to respond. But speaking as a software engineer who has been using MySQL for 22 years and learned administrative tasks as-needed over the years, personally I can't relate to anything you are saying here! What part of async replication setup did you find painful? How does Aurora help with troubleshooting? Why use blue/green for upgrade testing when there are much simpler and less expensive approaches using open source tools?


When I worked at AWS, the majority of customers who thought they had database backups had not tested recovery. The majority of them could not recover. At that point, RDS sells itself.

The other huge middle ground here is developer competency and meticulousness.

People radically overestimate how competent the average company writing software is.


Putting aside the fact that replication and backups are separate operational topics -- even if a company has no competent backend engineers, there are plenty of good database consultancies that can help with this sort of thing, as a one-time cost, which ends up being cheaper than the ongoing markup of a managed cloud database product.

There's also a big difference between incompetent and inexperienced. Operational incidents are how your team gains experience!

Leaning on managed cloud services can definitely make sense when you're a small startup, but as a company matures and grows, it becomes a crutch -- and an expensive one at that.


My "Homeserver" with its database running on an old laptop has less downtime than AWS.

I expect most, if not 99%, of all businesses can cope with a hardware failure and the associated downtime while restoring to a different server, judging from the impact of the recent AWS outage and the collective shrug in response. With a proper RAID setup, data loss should be quite rare; if more is required, a primary + secondary setup with a manual failover isn't hard.


That's not the same as a "high availability hot swap redundant multi region database".

Running mysqldump to a USB disk in the office once a day is pretty cheap.
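
For scale, that's the sort of thing a short script run nightly from cron covers. A rough sketch, where the path is a placeholder and mysqldump is assumed to read its credentials from ~/.my.cnf:

    # Sketch of a nightly dump to a locally mounted disk, run from cron.
    # The path is a placeholder; credentials are assumed to live in ~/.my.cnf.
    # Buffers the whole dump in memory, which is fine for small databases.
    import gzip
    import subprocess
    from datetime import date

    outfile = f"/mnt/usb-backup/all-dbs-{date.today().isoformat()}.sql.gz"

    # --single-transaction gives a consistent snapshot for InnoDB tables.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", "--all-databases"],
        capture_output=True, check=True)

    with gzip.open(outfile, "wb") as f:
        f.write(dump.stdout)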


Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new tech and protocols.

