The comment is a generic vent about the project’s codebase and development approach, not an effort to engage in curious conversation about this vulnerability. Also, I consider it to be in breach of the guidelines about fulmination, swipes/sneers, and curmudgeonliness.
The comment doesn't even seem to contain opinion. It's simply objectively true. Let's be honest, you just didn't like the way it was directly calling out the author for writing shitty software. Responsibility is a thing and the author is displaying none of it.
I don’t know or care whether it’s “objectively true”. That style of commenting, i.e., “calling out the author” is not what HN is for, regardless of the truthfulness of the comment. You’ve been around long enough to know that. HN is for curious conversation between hackers, i.e., people who like to build things. Attacking people for building things in some kind of “wrong” way is not cool here. “Responsibility” is not mentioned in the guidelines but kindness is.
Isn't the development approach part of the reason this exploit occurred? The creator openly admitted they weren't properly reviewing code when describing the project previously. With no engineers who have domain knowledge of the app (because the developers are AI), that leaves a wide gap for exploits to appear.
I feel like just filtering this comment out is a mistake. I use AI, and I think there is a place for it, but if a colleague said "Here's a PR, I didn't even review it" I'd send it back and say "Well you better review it!"
How AI is used is 100% a topic for debate, ranging from "All AI is bad" to "there will be no coding, just vibes". You agree with this right? That there are a range of developers who believe different things all along this spectrum, and that for some developers un-reviewed code is the CAUSE of bad code.
The current OpenClaw GitHub repo [1] contains 2.1 million lines of code, according to cloc, with 1.6M being TypeScript. It also has almost 26K commits.
How is there such a wide gap between developers?
Some are running ten agents pushing thousands of LOC a day; others are afraid to paste a code snippet from ChatGPT.
Cheaper. Every month or so I revisit the models we use and check whether each can be replaced by the cheapest, smallest model that still handles the same task. Some people do fine-tuning to achieve this too.
Context windows getting bigger doesn't make the economics go away. Tokens still cost money. 50K tokens of schemas at 1M context is the same dollar cost as 50K tokens at 200K context, you just have more room left over.
The pattern with every resource expansion is the same: usage scales to fill it. Bigger windows mean more integrations connected, not leaner ones. Progressive disclosure is cheaper at any window size.
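The arithmetic behind the point above can be sketched in a few lines. The price here is hypothetical (rates vary by model and provider); what matters is that input cost depends only on the tokens you send, not on the size of the window they fit in:

```python
# Hypothetical pricing for illustration only; real per-token rates vary by model.
PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000  # assume $3 per million input tokens

def prompt_cost(tokens: int) -> float:
    # Input cost is a function of tokens actually sent,
    # not of the model's context window size.
    return tokens * PRICE_PER_INPUT_TOKEN

# 50K tokens of schemas cost the same at a 200K window as at a 1M window;
# the larger window only leaves more headroom.
print(f"${prompt_cost(50_000):.2f}")  # → $0.15
```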
It helps with cost, agreed. But caching doesn't fix the other two problems.
1) Models get worse at reasoning as context fills up, cached or not. Right?
2) Usage expansion problem still holds. Cheaper context means teams connect more services, not fewer. You cache 50K tokens of schemas today, then it's 200K tomorrow because you can "afford" it now. The bloat scales with the budget...
Caching makes MCP more viable. It doesn't make loading 43 tool definitions for a task that uses two of them a good architecture.
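The contrast can be sketched roughly like this. The registry, tool names, and schema sizes below are all made up, not taken from any real MCP server; the point is only the shape of the two patterns:

```python
# Hypothetical tool registry: 43 tools, each with a fake 2000-character schema.
registry = {f"tool_{i:02d}": "x" * 2000 for i in range(43)}

def eager_context(reg):
    # Eager pattern: every full tool schema goes into the prompt up front.
    return "\n".join(reg.values())

def progressive_context(reg, needed):
    # Progressive disclosure: list only tool names up front, then expand
    # full schemas just for the tools the task actually uses.
    names = "\n".join(reg)
    schemas = "\n".join(reg[n] for n in needed)
    return names + "\n" + schemas

eager = eager_context(registry)
lean = progressive_context(registry, ["tool_00", "tool_07"])
# The lean prompt carries 2 full schemas instead of 43.
```

Even generously, the progressive prompt is a small fraction of the eager one, and that ratio holds at any context window size.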
Regardless of the intentions behind this, it is very, very illegal (if you are in the US) and could open the company up to serious liability down the line.
It seems like the previous browser antitrust ruling in Europe was 2010, with the browser ballot screen being required until 2014.
Given the scale of the fine Microsoft received after it broke that agreement by silently dropping the ballot screen (over €500m, I believe), it seems strange to me that Microsoft is so willing to get back into making it harder to switch browsers, given the ample past precedent.
Is user data gathered from the web browser under default settings really valuable enough to justify the risk?
Obviously €500m wasn't enough of a deterrent, and antitrust authorities need to be able to issue exponentially increasing fines to produce either compliance from, or the dissolution of, the repeat offender.
The only question up for debate is what the base of the exponential function should be.
Honestly, I don't expect antitrust to trigger here. With Apple's treatment of browsers on their phones being considered perfectly acceptable somehow (and the same can be said about pretty much every type of app store category, to be honest), I don't think there are any government bodies that even care about this type of antitrust anymore.
The fight for free access to app stores has replaced the fight for browser bundling.
Too much focus on shipping features, not enough attention to stability and security.
As the codebase grows, so does the attack surface for security vulnerabilities.