AI won't kill apps; it will just change who "clicks" the buttons. Even the most powerful AI needs a source of truth and a structured environment to pull data from. A world without websites is a world where AI has nothing to read and nowhere to execute. We aren't deleting the UI. We're just building the backends that feed the agents.
I want it, yes. I already feel like I'm the one doing the dumb work for the AI, manually clicking windows and typing in a command here or there that it can't do.
I've also been getting increasingly annoyed with how tedious it is to do the same repetitive actions for simple tasks.
Most books have so many needless details that I can't help but skip most of them.
On the other hand, technical books can be so overwhelmingly difficult that you need to go off and do hours of outside learning to understand one tidbit of them.
Depending on how large your codebase is, hopefully not. At that point, use something like the IX plugin to ingest the codebase and track context, rather than relying on the LLM's own context window.
- naiveTokens = 19.4M — what ix estimates it would have cost to answer your queries without graph intelligence (i.e., dumping full files/directories into context)
- actualTokens = 4.7M — what ix's targeted, graph-aware responses actually used
- tokensSaved = 14.7M — the difference
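The arithmetic in the report checks out; here's a minimal sketch recomputing it (variable names are assumed from the report fields above, and the percentage is my own derived figure, not part of the original report):

```python
# Recomputing the ix token report quoted above. Field names mirror the
# report; the numbers are the ones given in this thread.
naive_tokens = 19_400_000   # estimated cost of dumping full files/dirs into context
actual_tokens = 4_700_000   # what the graph-aware responses actually used

tokens_saved = naive_tokens - actual_tokens
savings_ratio = tokens_saved / naive_tokens

print(f"saved {tokens_saved:,} tokens ({savings_ratio:.0%} of the naive cost)")
# saved 14,700,000 tokens (76% of the naive cost)
```

So roughly three quarters of the naive token spend is avoided, if you take the tool's own "naive" estimate at face value.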
I mean, whatever part of the code is read by the AI has to be in the context window at some point or another. Spread throughout your sessions, I'd think that even with a huge codebase, 90% of it is going to be there.
I've been noticing something similar recently. If something's not working out, it'll be like, "OK, this isn't working out, let's just switch to doing this other thing you explicitly said not to do."
For example, I wanted to get VNC working with PopOS Cosmic, and it'll be like, "Ah, it's OK, we'll just install Sway and that'll work!"
Experienced this -- I was repeatedly directing CC to use the Claude in Chrome extension to interact with a webpage, and it kept invoking the Playwright MCP instead.
I actually submitted an upstream patch for Cosmic-Comp, thanks to Claude, on Saturday. I wanted to play the Guild Wars remake and something was going wrong with the mouse when moving the camera. We had it fixed in no time, and now shit is working great.
I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence there must be an understanding of the events happening in this space-time.
No, I'm saying the basis of intelligence must be shared, not that we have the exact same mental model.
I might, for example, say a human entered a building; a bat, on the other hand, might think "some big block with two sticks moved through a hole." But both are experiencing a shared physical observation, and there is some mapping between the two.
It's like when people say that if there are aliens, they would find the same mathematical constants that we do.
I don't get this criticism at all. Would you prefer someone write a shittier UI? And since when were people writing amazing, bug-free software beforehand, such that not being vibe-coded meant you could trust it was good software?
I guess, to be fair, beforehand nobody would have attempted this kind of thing and released it unless they knew what they were doing.
Both GP's and your example in effect mean "I'm fine with other people doing this, but I don't want to have anything to do with it, or at least be able to decide case-by-case."
Which is a valid stance IMO.
In the OP, a vibecoded UI when the whole project emphasizes "I did this myself, from scratch" is a bit awkward.
Does "I did this myself" mean they read all the relevant specs and then wrote the code - or did they just write the prompts themselves?
Edit: OP already answered and confirmed that they in fact did write the code themselves.
It's a tough pill for some HNers to swallow, but with a good process, you can vibe-code really good software, and software far more tested, edge-cased, and thoughtful than you would have come up with, especially for software that isn't that one hobby passion project that you love thinking about.
My process is just getting Claude Code to generate a plan file and then rinsing it through Codex until it has no more advice left.
I'd consider it vibe-coding if you never read the code/plan.
For example, you could package this up in a bash alias `vibecode "my prompt"` instead of `claude -p "my prompt"`, and it surely is still vibe-coding so long as you remain at arm's length from the plan/code itself.
I mean, to be fair, if you're using agents, more than likely you're not thinking about aspects of the code as deeply as you would have before.
If you write things yourself you spend far more time thinking about every little decision that you're making.
Even for tests, I always thought the really valuable part was that they forced you to think about all the different cases, and that just having a bunch of green checkboxes was, if anything, luring developers into a false sense of security.
There's definitely a trade-off, but it's a lopsided one that favors AI.
Before AI, you were often encumbered by the superficial aspects of a plan or implementation. So much so that we would often start implementing first and then kind of feel it out as we went, saving advanced considerations and edge cases for later, since we weren't even sure what the implementation would be.
That's useful for getting a visceral read on how a solution might feel in its fetal stage. But it takes a lot of time, energy, and commitment to look into the future, thinking about edge cases, tests, potential requirement churn, alternative options, etc., and to plan today around that.
With AI, agents are really good at running preformed ideas to their conclusion and then fortifying them with edge cases, tests, and trade-offs. Now your expertise is better spent deciding among trade-offs and deciding what the surface area looks like.
Something that also just came to mind is that before AI, you would get married to a solution/abstraction because it would be too expensive to rewrite code/tests. But now, refactoring and updating tests is trivial. You aren't committed to a bad solution anymore. Or, your tests are kinda lame and brittle because they're vibe-coded (as opposed to not existing at all)? Ok, AI will change them for you.
I also think we accidentally put our thumb on the scale in these comparisons. The pre-AI developer we imagine as a unicorn who always spends time getting into the weeds to suss out the ideal solution for every ticket, with infinite time, energy, and enthusiasm. The post-AI developer we imagine as someone incompetent. And we pit them against each other and say, "See? There's a regression."
I think I agree. Fast iteration, in many cases, beats long-thought-out ideas going in the wrong direction. The issue is purely one of mentality, where AI makes it really easy to push features fast without spending as much time thinking them through.
That said, iteration is much more difficult in established codebases, especially with production workflows where you need to be extra careful that your migration is backwards compatible and doesn't mess up features x, y, z, d across 5 different projects relying on some field or logical property.
Unless you go through the code with a fine-tooth comb, you're not even aware of what trade-offs the AI has made for you.
We've all seen the Claude Code source code. 4k class files. Weird try/catches. Weird trade-offs. Basic bugs people have been begging to have fixed, left untouched.
Yes, there's a revolution happening. Yes, it makes you more productive.
But stop huffing the kool-aid and be realistic. If you think you're still the one deciding on the trade-offs, I can tell you with sincerity that you should go try to refactor some of the code you're producing and see what trade-offs the AI is ACTUALLY making.
Until you actually work with the code again, it's ridiculously easy to miss the trade-offs the AI is making while it's churning out its code.
I know this because we've got some heavy AI users on our team who often just throw AI code straight into the repo without properly checking it. And worse, in code review it looks right, but then when something goes wrong, you ask, "why did they make that decision?" And then you notice there's a very AI-looking comment next to the code. And it clicks.
They didn't make that decision, they didn't choose between the trade-offs, the AI did.
I've seen weird timezone decisions, weird sorting, insane error-catching theatre, and changes to parts of the code it shouldn't even have looked at, let alone changed. In the FE sphere it's got no clue how to use useEffect or useMemo, it litters every div with tons of unnecessary CSS, and it can't split up code for shit. In the backend world it's insanely bad at following prior art on things like what the primary key field is, what the usual sorting priority is, and how it's supposed to use existing user contexts.
And the number of times it uses archaic code, from language versions 5-10 years ago, is really frustrating. At least with TypeScript + C#. With C#, if you see anything that doesn't use file-scoped namespaces or primary constructors, it's a dead giveaway that it was written with AI.
I feel this is the key: three years ago, everyone on HN would have been able to define "technical debt" and explain how it was bad and they hated it but had to live with it.
We've now built a machine that produces something that can't even be called "technical debt" anymore, perhaps "technical usury" or something, and we're all supposed to love it.
Most coders know that supporting and maintaining code will far outlast and outweigh the effort required to build it.
Produce this "far more tested, edge-cased, and thoughtful" vibe-coded software for us to judge, please.
All I hear are empty promises of better software, and in the same breath the declaration that quality is overrated and time-to-ship is why vibecoding will eventually win. It's either one or the other.
I’ve said it before here, but my mind was swayed after talking with a product manager about AI coding. He offhandedly commented that “he’s been vibe coding for years, just with people”. He wasn’t thinking much about it at the time, but it resonated with me.
To some agents are tools. To others they are employees.
I had a similar realisation in IT support: I regularly discover that the answers I get from junior- to mid-level engineers need to be verified, are based on false assumptions, or are wildly wrong, so why am I being so critical of LLM responses? Hopefully some day they'll make it to senior-engineer levels of reasoning, but in the meantime they're just as good as many on the teams I work with, and so they have their place.
It's the same thing. No one can keep up with their plan-mode/spec-driven-whatever process. All agent-driven projects become vibe-coded "this is not working" projects.
A lot of people are only in the beginning stages, so they think it's different because they came up with some fancy-looking formal process to generate vibes.