More likely that their core database hit some scaling limit and fell over. Their status page keeps mentioning that they're working with their "upstream database provider" (presumably AWS) to find a fix.
My guess: they use AWS-hosted PostgreSQL, autovacuuming fell permanently behind without them noticing and can't keep up with organic growth, and they can't scale vertically because they already maxed that out. So now they have to do emergency migrations of data off their core DB, which is why it's taking so long.
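For anyone who wants to sanity-check that theory against their own Postgres, here's a minimal sketch (assuming psycopg2; the DSN and the 20% dead-tuple threshold are placeholders, nothing Webflow-specific) that lists the tables with the most dead tuples and when autovacuum last touched them:

    import psycopg2

    # Tables with the most dead tuples, plus the last autovacuum run on each.
    QUERY = """
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;
    """

    # Placeholder DSN; point it at the database you want to inspect.
    conn = psycopg2.connect("dbname=app host=localhost user=readonly")
    with conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for relname, live, dead, last_av in cur.fetchall():
            ratio = dead / max(live, 1)
            # Arbitrary 20% threshold: lots of dead tuples relative to live
            # ones suggests autovacuum isn't keeping up on that table.
            flag = " <-- falling behind?" if ratio > 0.2 else ""
            print(f"{relname}: {dead} dead / {live} live, "
                  f"last autovacuum {last_av}{flag}")

If the biggest tables show dead-tuple counts rivaling the live ones and last_autovacuum timestamps from weeks ago, you're in the territory described above.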
Feature factories falling apart always remind me of the stories doctors tell about patients showing up with symptoms that the doctor could have done something about two years ago. Like diabetes so far progressed that amputation is now on the list of possibilities.
Nobody asks for help when the help can still be productive. It's always the deathbed conversion.
An outage of this magnitude is almost ALWAYS the direct and immediate fault of senior leadership's priorities and focus. Pushing too hard in some areas, not listening to engineers on needed maintenance tasks, etc.
And are engineers never the cause of mistakes? There can't possibly be data to back up the claim that major outages are more often caused by leadership. I've been in severe incidents simply because someone pushed a change that took out a switch network. Statements like these only show how much we have to learn, humble ourselves, and stop blaming others all the time.
Leadership can include engineers responsible for technical priorities. If you're down for that long, though, it's usually an organizational fuck-up, because the priorities didn't include identifying and mitigating systemic failure modes. The proximate cause isn't all that important, and the people who set organizational priorities are by and large not engineers.
Think of airplane safety; I think it's similar. A good culture makes it more likely that $root-cause is detected, tested for, isolated, monitored, easy to roll back, and so on.
Hugops to the people working on this for the last 31+ hours.
Running incidents of this significance is hard, draining, and takes a lot of effort; one going on for this long must be very difficult for everyone involved.
Prediction: Someone confidently broke something, then confidently 'fixed' it, with the consequence of breaking more things instead. And now they have either been pulled off of the cleanup work or they wish they had been.
Wow, >31h. I'm surprised they couldn't rebuild their entire system in parallel on new infra in that time. That can be hard if data loss is involved, though (a guess). Would love to see the postmortem so we can all learn.
I doubt it's an infra failure so much as a software failure. Their bad design has caught up with them, and for some reason they can't just throw more hardware at it. Most companies have this https://xkcd.com/2347/ somewhere in their stack, and theirs has fallen over.
I agree the original setup didn't allow for much else in the way of references; I just wanted to shoehorn in a perhaps more apropos time-travel movie reference.
If you like Primer, you might also like this Australian time-travel movie, though I don't think it references 88 MPH either. It has a kind of Eternal Sunshine vibe and also involves weird tech.
Lots of teams get starry-eyed and aim for five nines right out of the gate when they should have been targeting nine fives and learning from that. Walk before you run.
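For context, the back-of-the-envelope arithmetic on what those targets actually allow (a rough sketch, nothing Webflow-specific):

    # Downtime budget per year for a given availability target.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for label, availability in [("five nines", 0.99999),
                                ("three nines", 0.999),
                                ("nine fives", 0.555555555)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label}: ~{downtime:,.0f} minutes of downtime per year")

Five nines allows about 5 minutes of downtime a year; "nine fives" allows roughly 162 days. That's the joke: pick a target you can actually hit, learn from it, then tighten.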
> Change controls are tighter, and we’re investing in long-term performance improvements, especially in the CMS.
This reads as if overall performance was an afterthought, which doesn't seem practical; it should be a business metric, since it matters to users, after all.
Then again, it’s easy to comment like this in hindsight. We’ll see what happens long term.
Hugs to the people dealing with this, and to the Webflow users who built their clients' sites on it. Hoping they'll release a full postmortem once the sky clears up.
The problem is they get good at that specific disaster. They can only plug a hole in the dike after the hole exists; then they look at the hole and make a plug the exact shape of that hole. The next hole starts the process over for it specifically. Each time. There's no generic plug that can be used every time. So sure, they get very good at making specific plugs. They never get to the point of building a better dike that doesn't spring so many leaks.
okay, and? the CTO isn't the last word in anything. if they're overruled in favor of releasing new features, acquiring new users/clients, and sales-forward dev cycles, then the whole thing can collapse under its own weight.
It's actually the job of the CEO to keep all of the C-suite people doing their jobs. Doesn't seem to stop the CEO salary explosions.
Companies, after a disaster, focus lots of effort on that particular disaster, leaving all the other potential disasters unplanned for.
If you work at Webflow, you can anticipate LOTS of work in disaster recovery in the next 12 months. This has magically become a high priority for the CEO, who previously wanted features more than disaster recovery planning.
They will wait to focus massive resources on their security until after they get hacked.
Because imagine your local biz: it can either pay a designer 1k a year, or DIY and pay GoDaddy 200 bucks. Or 30 bucks for WordPress plus 20 hours of fiddling and asking their cousin for help.
It's not great by our standards, but I bet many of us drink the house wine, not something more sophisticated, right :)
Why? Genuinely asking. Did you mean because there are free alternatives to self-host? I don't think it would be that easy for someone in the market for a WYSIWYG blog builder to set everything up themselves.
Exactly. Given the abundance of one-click-deploy WordPress offerings from value providers like OVH/Hetzner, I would think margins are very low for WYSIWYG site builders.
I have no clue what "webflow" is for based on its marketing/buzzword-filled landing page, but it seems to be just a "no code" abstraction on top of HTML/CSS?
yet another SaaS that really does not need to be online 24/7. It could have been a simple app where you "no code" on your local machine and sync state with Webflow's servers asynchronously.
It's painful to use, but it lets non-technical clients edit copy and create content in a safe environment. There's a runtime CMS types creator and a WYSIWYG HTML editor with support for code blocks from global to inline scope. It also comes with batteries-included deploys. It's basically Squarespace/Wix, one or two levels up.
if you have a web-based SaaS, everyone gets the updates. if you have a "simple app", then you depend on all of the users being up to date, which you just cannot guarantee. also, what is a "simple app" that doesn't care about the differences among the various OSes found in the wild? how large a team do you need for each of those OSes to support as wide a user base as a web-only app?
the customer can self-determine just fine using a web-based SaaS no-code website builder. it's not like this is a different type of app: the thing makes a website that is also hosted by the maker of the app. if you want to make a website to host on your own servers, then you are not the target audience of the web app.
you're like the person complaining that the hammer isn't very useful for driving in a screw. you need a different tool/app if you want to make a site you host yourself.
Claude, here is the bug, fix it. This is the new log output, fix the error. Fix the bug. Try a different approach. Reimplement the tests you modified. The bug is still happening, fix it. Fix the error.
We're out of credits, create a new account. We've been API rate limited? When did that start happening? When are we going to get access again?
More like "Good luck, users of the future," who'll have to wade through failing infrastructure and tools that were vibe-coded to begin with, rate limits notwithstanding.
My guess is the reason they've been down so long is that they don't have good rollbacks, so they're attempting to fix forward with limited success.