ryanisnan's comments | Hacker News

The amount of negativity in the original post was astounding.

People were making all sorts of statements like:

- “I cloned it and there were loads of compiler warnings”
- “the commit build success rate was a joke”
- “it used 3rd party libs”
- “it is AI slop”

What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

If you are hung up on commit build quality, or code quality, you are completely missing the point, and I fear for your job prospects. These things will get better; they will get safer as the workflows get tuned; they will scale well beyond any of us.

Don’t look at where the tech is. Look where it’s going.


As mentioned elsewhere (I'm the author of this blogpost), I'm a heavy LLM user myself; I use it every day as a tool and get lots of benefit from it. This is not a "hit post" on using LLM tools for development, it's a post about Cursor making grand claims without being able to back them up.

No one is hung up on the quality, but there is a ground-truth fact of whether something "compiles" or "doesn't". No one is gonna claim a software project was successful if the end artifact doesn't compile.


I think, for the point of the article, it appeared to render homepages for select well-known sites at some point. I certainly did not expect this to be a serious browser, with any reliability or legs. I don't think that is dishonest.

> I certainly did not expect this to be a serious browser, with any reliability or legs.

Me neither, and I note so twice in the submission article. But I also didn't expect a project where the last 100+ commits couldn't reliably be built, and therefore couldn't be tested or tried out.


My apologies - my point(s) were more about the original submission for the Cursor blog post, not your post itself.

I did read your post, and agree with what you're saying. It would be great if they pushed the agents to favour reliability or reproducibility, instead of just marching forwards.


> What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

Correct, but Gas Town [1] already happened and, what's more, _actually worked_, so this experiment is both useless (because it doesn't demonstrate working software) _and_ derivative (because we've already seen that you can set up a project where, with spend similar to that of a single developer, you churn out more code than any human could read in a week).

[1]: https://github.com/steveyegge/gastown


> What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

The reason I have yet to publish a book is not because I can't write words. I got to 120k words or so, but they never felt like the right words.

Nobody's giving me (nor should they give me) a participation trophy for writing 120k words that don't form a satisfying novel.

Same's true here. We all know that LLMs can write a huge quantity of code. Thing is, so does:

  yes 'printf("Hello World!");'

The hard part, the entire reason to either be afraid for our careers or be thrilled that we can switch to something more productive than being code monkeys for yet another CRUD app (depending on how we feel), is the specific test that this experiment failed.

Spending 24h/day to build nothing isn't impressive - it's really, really bad. That's worse than spending 8h/day to build nothing.

If the piece of shit can't even compile, it's equivalent to 0 lines of code.

> Don’t look at where the tech is. Look where it’s going.

Given that the people making the tech seem incapable of not lying, that doesn't give me hope for where it's going!

Look, I think AI and LLMs in particular are important. But the people actively developing them do not give me any confidence. And, neither do comments like these. If I wanted to believe that all of this is in vain, I would just talk to people like you.


>If you are hung up on commit build quality

I'm sorry, but what? Are you really trying to argue that it doesn't matter that nothing works, that all it produced is garbage, and that what is really important is that it made that garbage really quickly without human oversight?

That's... that's not success.


Quality absolutely matters, but it's hyper context dependent.

Not everything needs to, or should, have the same quality standards applied to it. For the purposes of the Cursor post, it doesn't bother me that most of the commits produced failed builds. I assume, from their post, that at some point it was capable of building and rendering the pages shown in the video. That alone is the thing I find interesting.

Would I use this browser? Absolutely not. Do I trust the code? Not a chance in hell. Is that the point? No.


"Quality" here isn't if A is better than B. It's "Does this thing actually work at all?"

Sure, I don't care too much if the restaurant serves me food with silverware that is 18/10 vs 18/0 stainless steel, but I absolutely do care if I order a pizza and they just dump a load of gravel onto my plate and tell me it's good enough, and after all, quality isn't the point.


Software that won’t compile and doesn’t do anything is not software, it’s just a collection of text files. A computer that won’t boot isn’t a computer anymore, it’s a paperweight. A car that won’t start isn’t a car anymore, it’s scrap metal.

I can bang on a keyboard for a week and produce tons of text files - but if they don’t do anything useful, would you consider me a programmer?


> Quality absolutely matters, but it's hyper context dependent.

There are very few software development contexts where the quality metric of “does the project build and run at all” doesn’t matter quite a lot.


It is hard to look at where it is going when there are so many lies about where the tech is today. There are extraordinary claims made on Twitter all the time about the technology, but when you look into things, it's all just smoke and mirrors; the claims misrepresent the reality.

What a silly take. Where the tech is is extremely relevant. The reality is that this blog post shows the tech clearly isn't going anywhere better either, despite what they seem to imply. 24 hours of useless code is still useless code.

This idea that quality doesn't matter is silly. Quality is critical for things to work, scale, and be extensible. By either LLMs or humans.


People who spend time poking holes in random vendor claims remind me of the folks you see in videos standing on the beach during a tsunami warning. Their eyes are fixed on the horizon, looking for a hundred-foot wave, oblivious to the shore in front of them rapidly being gobbled up by the sea.

> oblivious to the shore in front of them rapidly being gobbled up by the sea

Am I misunderstanding this metaphor? Tsunamis pull the sea back before making landfall.


This looks awesome - my son loves gears, and my wife and I have been talking about buying him a 3D printer soon. Thank you!


You are a great writer - thanks for putting this together!


I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.


That one got me as well - some pretty wild stuff about prompting the compiler, Starship on the Moon, and then there's SQLite 4.0.


You can criticize it for many things but it seems to have comedic timing nailed.


Needs a dang archetype, who merges similar posts.


that is a great idea.


thanks! love the app, it's really fun, and surprisingly engaging, despite knowing that it's all AI nonsense


What an awesome piece of technology. I've been wanting to create something similar, just on the technical merits. We have some pretty amazingly capable technology these days, but so much of it relies on IP infrastructure, which is fine when things work and you are either aligned with your government, or live in a society where there are strong checks and balances on government overreach.


Exactly. With Chat Control being revived again in the EU, various VPN bans being proposed in US states, and ID verification rolling out seemingly everywhere, this kind of tech may end up being more useful than people expect. If it works in the extremely adversarial environment of a warzone, it should work fine here.


How is this a solution to Chat Control and EU law? If it is used, governments will simply demand that Apple and Google declare the app forbidden, which both have done to apps for many reasons.

Worse: they might demand a list of people who have it installed (and this violates the Chat Control law of course).

Even worse: this app turns out to be written by a security agency or scammers and starts exploiting people.


If they are demanding a list of people who have apps installed, you have two options: lie down like a dog or get in the streets and fight. If you think it’s going to get to that point, you need tools like this even more.


Why is Chat Control controversial? It seems like the people afraid of this are the same people who are outraged when others then use private chat to do bad things.


The thing that I really like about the approach taken by OP is that it is, AFAIK, broadcast-only, up to a certain radius. The hard part in mesh networking is routing, and broadcast sidesteps that.
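
A minimal sketch of what broadcast-only relaying can look like, assuming a hypothetical message format with a TTL and a seen-set to stop rebroadcast loops (an illustration of the general idea, not how the OP's app actually works):

  import uuid

  seen = set()  # IDs of messages this node has already handled

  def make_message(payload, ttl=3):
      # no destination address: every message goes to whoever is in radio range
      return {"id": str(uuid.uuid4()), "ttl": ttl, "payload": payload}

  def on_receive(msg, deliver, rebroadcast):
      # flood-and-forget: deliver locally, then resend with the hop budget decremented;
      # the seen-set and TTL are the only state, so there are no routing tables to maintain
      if msg["id"] in seen or msg["ttl"] <= 0:
          return
      seen.add(msg["id"])
      deliver(msg["payload"])
      rebroadcast({**msg, "ttl": msg["ttl"] - 1})

  # toy usage: here the "radio" is just a print and a no-op
  on_receive(make_message("hello, mesh"), deliver=print, rebroadcast=lambda m: None)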


I think you're misreading the situation. As far as I can tell, Russia has every reason to want to continue engaging in heavy cyber-criminal activities. I don't think this is the virtuous Kremlin turning a blind eye. This is a classic case of deception. Look at my left hand, so you don't see what my right is doing.


They see it as asymmetrical warfare, I know that; but if the US let US cybercriminals steal millions of Russian and Chinese credit cards or other PII, I would perceive that as distasteful and not as a form of counterintelligence.


Considering that America has allowed hundreds of billions of dollars belonging to individuals in Russia to be stolen, I do not see the validity of your argument.


It was caused by an unprovoked and illegal invasion of a neighbouring country; that's tit for tat. They also seized the assets of Nazi Germany when Hitler decided to go full YOLO.


These are just words, excuses that hide the fact that they illegally stole funds that did not belong to them.

Can I read more about the seizure of German bank accounts? As far as I know, the USA continued to sponsor Germany until 1944 (through various branches).

Some Swiss banks serviced Nazi accounts until 2020 and saw no problem with it.

A tit for tat would have been if Iraq had seized American assets in response to the US invasion and the theft of Iraq's gold assets.

Remind me, when did Russia invade the USA or the EU?


I'm inclined to think you're right, but I can't figure out one thing: the command module in Apollo 13 (apparently) got down to 38°F without active heating. That's much colder than standard data centre rack temps.

In the case of a data centre, there would be considerably more heat generation than from three astronauts, but I would like to understand more. 38°F is cold, so heat is clearly not lost as slowly as we might think.


The Apollo passive radiators can dissipate ~2,500 watts into space. With most systems shut down, only ~500 watts was coming from the remaining systems and the astronauts' bodies.


Cool, thank you. So I read this as: fundamentally, the heat they could dissipate far exceeded the heat they produced. Do you mind opining on what the similar figures would be with modest passive radiators and a typical data centre rack's heat output?


No idea what the passive radiators might look like (50x the size of Apollo?), but an Nvidia GB300 NVL72 uses 120,000 watts.
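
As a rough back-of-envelope using the figures in this thread (assuming the ~2,500 W Apollo number simply scales, which ignores radiator temperature and geometry):

  # hypothetical sketch: how many Apollo-class panels per rack?
  rack_heat_w = 120_000       # Nvidia GB300 NVL72, per the comment above
  apollo_radiator_w = 2_500   # approximate dissipation of the Apollo passive radiators
  print(rack_heat_w / apollo_radiator_w)  # -> 48.0, i.e. roughly the "50x" guessed above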


Anecdotally, I think you should disregard this. I found out about this issue first via Reddit, roughly 30 minutes after the onset (we had an alarm about control plane connectivity).

