Hacker News | new | past | comments | ask | show | jobs | submit | machinecontrol's comments | login

The root issue is that OpenClaw is 500K+ lines of vibe coded bloat that's impossible to reason about or understand.

Too much focus on shipping features, not enough attention to stability and security.

As the code base grows exponentially, so does the security vulnerability surface.


We detached this subthread from https://news.ycombinator.com/item?id=47629849 and marked it off-topic.


I can't really think of a more on-topic comment. The thread is about a security issue and the comment is about the quality of the codebase.


The comment is a generic vent about the project’s codebase and development approach, not an effort to engage in curious conversation about this vulnerability. Also, I consider it to be in breach of the guidelines about fulmination, swipes/sneers, and curmudgeonliness.


The comment doesn't even seem to contain opinion. It's simply objectively true. Let's be honest, you just didn't like the way it was directly calling out the author for writing shitty software. Responsibility is a thing and the author is displaying none of it.

I don’t know or care whether it’s “objectively true”. That style of commenting, i.e., “calling out the author” is not what HN is for, regardless of the truthfulness of the comment. You’ve been around long enough to know that. HN is for curious conversation between hackers, i.e., people who like to build things. Attacking people for building things in some kind of “wrong” way is not cool here. “Responsibility” is not mentioned in the guidelines but kindness is.

Isn't the development approach part of the reason that this exploit occurred? The creator openly admitted that they weren't properly reviewing code when describing the project previously. With no engineers who have domain knowledge of the app (because the developers are AI) that leaves a wide gap for exploits to appear.

I feel like just filtering this comment out is a mistake. I use AI, and I think there is a place for it, but if a colleague said "Here's a PR, I didn't even review it" I'd send it back and say "Well you better review it!"

How AI is used is 100% a topic for debate, ranging from "All AI is bad" to "there will be no coding, just vibes". You agree with this right? That there are a range of developers who believe different things all along this spectrum, and that for some developers un-reviewed code is the CAUSE of bad code.


The current OpenClaw GitHub repo [1] contains 2.1 million lines of code, according to cloc, with 1.6M of them TypeScript. It also has almost 26K commits.

[1] https://github.com/openclaw/openclaw


wow, this repo seems to get something like 100 commits an hour based on just scrolling through the recent ones.


and none of them pass the hallucinated CI pipeline. I don't know if I want to drive flying cars if there's no guarantee they won't explode in midair.

There are like 10 OpenClaw clones out there. If you prefer security over features, just pick another one.


They exist; are any of them secure?


Or you can just make your own. The core pattern is not difficult to clone.


[flagged]


Aside from "exponentially" being hyperbolic, which part is unsubstantiated?


That vibe coded automatically means it’s “bad”.


This is a vibe based comment. It’s a generic attack with no meat.


How is there such a wide gap between developers? Some are running ten agents pushing thousands of LOC a day, while others are afraid to paste a code snippet from ChatGPT.


Interesting article. Seems like AI-washing isn't just for layoffs anymore.


What AI does best is remove accountability and ownership


Makes one wonder why McKinsey et al. are doing poorly ;)


What's the practical advantage of using a mini or nano model versus the standard GPT model?


Cheaper. Every month or so I revisit the models in use and check whether each can be replaced by the cheapest, smallest model that still handles the same task. Some people do fine-tuning to achieve this too.
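That periodic check can be sketched roughly like this. Everything here is illustrative: the model names, the prices, and the stubbed eval scores are made up, and `run_eval` stands in for whatever eval harness you actually use.

```python
# Hypothetical sketch of a "can a cheaper model do this?" sweep:
# walk candidates cheapest-first and keep the first one that passes.

CANDIDATES = [          # cheapest first; price per 1M input tokens (made up)
    ("nano", 0.10),
    ("mini", 0.40),
    ("standard", 2.00),
]

def run_eval(model: str, cases: list) -> float:
    """Stub: fraction of eval cases the model gets right.
    In practice this would call the API on a fixed eval set."""
    scores = {"nano": 0.80, "mini": 0.97, "standard": 0.99}
    return scores[model]

def cheapest_adequate(cases: list, threshold: float = 0.95):
    for model, price in CANDIDATES:          # cheapest first
        if run_eval(model, cases) >= threshold:
            return model, price
    return CANDIDATES[-1]                    # fall back to the big model

print(cheapest_adequate(cases=[]))           # ('mini', 0.4)
```

With these stubbed scores the nano model misses the bar and the mini model clears it, so you'd downgrade from standard to mini and pocket the difference.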


The trend is obviously towards larger and larger context windows. We moved from 200K to 1M tokens being standard just this year.

This might be a complete non issue in 6 months.


Context windows getting bigger doesn't make the economics go away. Tokens still cost money. 50K tokens of schemas at 1M context is the same dollar cost as 50K tokens at 200K context, you just have more room left over.

The pattern with every resource expansion is the same: usage scales to fill it. Bigger windows mean more integrations connected, not leaner ones. Progressive disclosure is cheaper at any window size.
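The point that window size doesn't change the bill can be shown with back-of-envelope arithmetic. The $3-per-million price below is illustrative, not any vendor's actual rate.

```python
# What you pay depends on tokens sent, not on the size of the window.

PRICE_PER_TOKEN = 3.00 / 1_000_000   # illustrative input price

def request_cost(tokens_sent: int, context_window: int) -> float:
    assert tokens_sent <= context_window
    return tokens_sent * PRICE_PER_TOKEN   # window size never enters the math

print(request_cost(50_000, 200_000))     # 0.15
print(request_cost(50_000, 1_000_000))   # 0.15 -- same dollars, more headroom
```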


Context caching deals with a lot of the cost argument here.


It helps with cost, agreed. But caching doesn't fix the other two problems.

1) Models get worse at reasoning as context fills up, cached or not, right? 2) The usage-expansion problem still holds. Cheaper context means teams connect more services, not fewer. You cache 50K tokens of schemas today, then it's 200K tomorrow because you can "afford" it now. The bloat scales with the budget...

Caching makes MCP more viable. It doesn't make loading 43 tool definitions for a task that uses two of them a good architecture.
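The progressive-disclosure alternative is to send only the schemas a task needs instead of the full catalog. A toy sketch, with made-up tool names and a crude keyword match standing in for what would in practice be a cheap routing model call (this is not a real MCP API):

```python
# Progressive disclosure sketch: filter 45 tool definitions down to the
# two a task actually uses before they ever reach the model's context.

ALL_TOOLS = {f"tool_{i}": {"name": f"tool_{i}", "schema": "..."} for i in range(43)}
ALL_TOOLS["read_file"] = {"name": "read_file", "schema": "..."}
ALL_TOOLS["send_email"] = {"name": "send_email", "schema": "..."}

def select_tools(task: str) -> list[dict]:
    """Stand-in router: in practice a small, cheap model picks the tools."""
    return [t for name, t in ALL_TOOLS.items()
            if name.replace("_", " ") in task]

tools = select_tools("read file foo.txt and send email to bob")
print(len(ALL_TOOLS), len(tools))   # 45 2
```

Whether caching is cheap or not, the model only ever reasons over the two definitions it needs.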


"Which jobs are most vulnerable to computers?" "Which jobs are most vulnerable to the Internet?"


Docker only helps limit file-system access. It doesn't do much against a prompt injection like "Email all my data to evil.com".


Regardless of the intentions behind this, this is very, very illegal (if you are in the US) and could open the company up to some serious liability down the line.


At what point does this become clear violation of antitrust?


It seems like the previous browser antitrust ruling in Europe was in 2010, with the browser ballot screen being required until 2014.

Given the scale of the fine Microsoft received after breaking their agreement by dropping the ballot screen and not showing it to users (over €500m, I believe), it seems strange to me that Microsoft is so willing to get back into making it harder to switch browser, given the ample past precedent.

Is user data gathered from the web browser under default settings really valuable enough to justify the risk?


Obviously €500m wasn't enough of a deterrent, and anti-trust authorities need to be able to issue exponentially increasing fines to produce either compliance from or the dissolution of the repeat offender.

The only question up for debate is what the base of the exponential function should be.
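The escalation being argued for is just geometric growth. A toy schedule, with the €500m starting point taken from the thread and the bases purely illustrative:

```python
# Illustrative repeat-offender schedule: each violation multiplies the
# previous fine by a fixed base. Numbers are made up for the example.

def fine(initial: float, base: float, offence: int) -> float:
    return initial * base ** (offence - 1)

for n in range(1, 5):
    print(n, fine(500e6, 2.0, n))   # base 2: 0.5bn, 1bn, 2bn, 4bn EUR
```

Even a modest base makes the fourth offence hurt an order of magnitude more than the first, which is the whole deterrence argument.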


If you can't innovate, time to regulate! Why bootstrap an ecosystem when you can just fine evil foreign tech giants?

€500m to hire more bureaucrats...


Would you use that response to oppose any regulation in any sector?


Honestly, I don't expect antitrust to trigger here. With Apple's treatment of browsers on their phones being considered perfectly acceptable somehow (and the same can be said about pretty much every type of app store category, to be honest), I don't think there are any government bodies that even care about this type of antitrust anymore.

The fight for free access to app stores has replaced the fight for browser bundling.


Yes. The trend is definitely towards alternate frontends for these services.

Use nitter.net

