
Me too. I am exhausted by all the human slop constantly complaining about comments or posts being AI slop. I think I've had better conversations with bots over the last few months lol.

This is everywhere, not just HN: Reddit, LinkedIn, and Twitter too. You are guaranteed to get two types of comments on anything you post: 1. comments accusing you of AI slop (I call them human slop), and 2. comments by AI bots (surprisingly, some are helpful and useful). In the end, your bot will be a reflection of you.

Asshole humans will train asshole bots.

ps: I don't even know how any of these sites can fight these bots. The LLMs are amazing, and with openclaw and similar tools it's basically impossible to distinguish bots from humans.


The article makes a reasonable case for internal tools but glosses over the elephant in the room: if every company can vibe code their own B2B tools, what happens to the SaaS vendors? The ones that survive will be the ones where the distribution and ecosystem around the product matters more than the code itself. Nobody is going to vibe code their own Stripe or Salesforce, but the long tail of niche B2B tools is absolutely vulnerable.

They're right that nobody is vibe coding a full ERP system. But that's not really the threat. The threat is vibe coders building the 20% of Workday that 80% of small businesses actually use, and selling it for a fraction of the price. The market for small, focused tools that replace one expensive feature of a bloated enterprise platform is massive and growing fast.

The maker movement comparison is interesting but I think it breaks down in one key way: the marginal cost of software distribution is basically zero. 3D printing still requires physical materials and shipping. Vibe coded apps can reach users instantly if there's a discovery mechanism.

The real parallel might be the early web era where anyone could make a website but finding them required Yahoo directories and later Google. Right now vibe coded apps have the same discovery problem - they exist but there's no effective way to find or evaluate them.


The part about vibe coding lowering the barrier to building software is well established at this point. What nobody seems to be addressing is the distribution problem that creates. We're about to have an order of magnitude more software being built, but no corresponding improvement in how people discover and evaluate it. App stores were designed for a world where shipping software was hard. We need new discovery mechanisms for a world where shipping is easy.

Context divergence is a real problem once you have more than one person prompting on the same codebase. The git-native approach makes sense since that's already where the code lives. Have you seen cases where different team members' LLMs generate conflicting architectural patterns even with shared context? Curious how much shared context actually prevents drift vs. just documenting it.

cool concept. the million dollar homepage model is clever for bootstrapping initial attention.

interesting that you mention this is your first vibe coding project. curious where you plan to list it long-term for discovery. product hunt gives you a day of visibility but then what? feels like there's a gap in the market for a place where vibe-coded projects can live and get discovered organically.


The hook-driven status tracking is a nice pattern. Running multiple agents in parallel and needing visibility into what's happening is a real problem once you go past one or two. The git worktree automation is a smart touch too.

Curious about the hook interface - is it specific to Claude Code or generic enough to work with other agent frameworks?
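For anyone who hasn't used it, the worktree-per-agent part can be done with plain git; here's a rough sketch of the idea (directory and branch names are made up, not from the project):

```shell
# Hypothetical sketch: one git worktree per agent, so parallel
# agents each get an isolated checkout of the same repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "agent@example.com"
git config user.name "agent"
git commit -q --allow-empty -m "init"

for agent in agent-1 agent-2; do
  # each agent works on its own branch in its own directory,
  # so concurrent edits never clobber each other's checkout
  git worktree add -q "$repo-$agent" -b "$agent/task"
done

# lists the main checkout plus one entry per agent worktree
git worktree list
```

The nice property is that all worktrees share one object store and one set of branches, so merging an agent's result back is an ordinary `git merge`.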


This is a really important cautionary tale about autonomous AI agents operating without proper guardrails. The gap between 'AI agent that can do useful tasks' and 'AI agent that understands consequences' is still enormous. It highlights why having human oversight in the loop matters — whether it's content review, action approval, or just sanity-checking outputs before they go live. The best setups treat the AI as a capable but supervised collaborator, not a fully autonomous actor.

