
A lot of people are getting good results using it on hard things. Obviously not perfect, but > 50% success.

That said, more and more people seem to be arriving at the conclusion that if you want a big, complex task in a large existing codebase done right, you'll have better odds with Codex GPT-5.2-Codex-XHigh than with Claude Code Opus 4.5. It's far slower than Opus 4.5, but more likely to get things correct, and complete, on its first turn.


yes, i also get good results. that's why i use it on the hard things.

It's a half-joke. No need to take it that seriously or that jokingly. It's mostly only grifters and cryptocurrency scammers claiming it's amazing.

I think ideas from it will probably partially inspire future, simpler systems.


It may be a joke in the same way that brogramming was a joke and somehow became an enduring tech bro stereotype.

Strong agreement with this. The whimsical, fantasy, fun, lighthearted things are great until a large enough group of people take them as a serious life motto and then try to push it on everyone else.

Taking the cryptocurrency boom (as a whole) as the guide, the problem is the interaction of two realities: big money on the table, and the self-fulfilling-prophecy (not to say Ponzi) dynamic of needing people to keep clapping for Tinker Bell, in greater and greater numbers, to keep the line going up. It corrupts whimsical fun and community spirit, it corrupts idealism, and it corrupts technical curiosity.

Stevey has already made $300K from the cryptocurrency grift on Gas Town. Read his blog post about it.

Complete with a "Let’s goooooooo!"

And FOMO stories about missing out on Bitcoin when he knew about it, so he doesn't want you to miss out on this new opportunity to get "filthy rich" as an "investor" while you still can.


More details on the pump-and-dump scheme he helped promote and drew money from: https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

MOOLLM has its own official currency -- MOOLAH!

https://github.com/SimHacker/moollm/tree/main/skills/economy...

The official currency of MOOLLM is MOOLAH. It uses PROOF OF MILK consensus — udderly legen-dairy interga-lactic shit coin, without the bull.


This initially made my heart sink, but in all his replies there are like 50 very clearly unintelligent crypto grifters telling him he needs to be killed for scamming them, so I am unsure who to root for at this point. It's depressing that he accepted it, but I might partially forgive it because he made a lot of them lose money.

[flagged]


Why is it hard to criticize people for being part of a scam operation? It's so morally and ethically bankrupt that it's really easy and valid to criticize someone for it.

Who is being scammed? The only people buying into tokens as obscure as these are degenerate gamblers who know very well that it's not any kind of an investment.

That sounds like victim blaming to me

It's not a scam; there's no misrepresentation. This very clearly isn't marketed as an investment of any kind: https://bags.fm

https://apps.apple.com/app/bags-financial-messenger/id647319...

The tagline of the app? "Buy & sell memecoins". Transparently advertised as a crowdfunding mechanism using memecoins.


Yegge wrote a blog post for his readers where he called it an investment and hoped the investors would get “filthy rich”.

What? Of course it's marketed as an investment. That's the sole thing it's marketed as. Are you not able to lift the thinnest veil imaginable?

Because you'd be aiding and abetting a pennystock scam.

The difference between bags.fm and pennystock scams is that bags.fm is very obviously not marketed as an investment, but a crowdfunding tool.

It's absolutely marketed as an investment, and solely used and referenced by people saying it is an investment. This is like saying those cannabis paraphernalia shops are marketed as only for tobacco.

Yegge wrote a blog post for his readers where he called it an investment and hoped the investors would get “filthy rich”.

But people do. There are people who genuinely think crypto is an investment. Yes, smart people know it is just a grift and that it is just about selling it on to the next person before it crashes. But is it moral to make money off stupid people? Many people lose all their money gambling, even though we have always known gambling is a losing game.

> There are people who genuinely think crypto is an investment.

Sure! Are those people buying bags.fm tokens? Probably not.

This isn't even marketed as an investment (https://bags.fm), but as a crowdfunding tool for developers with a casino attached.

You don't have to be smart to read the big text on the website.


You don't have to be smart to understand they're very, very, very obviously saying it's an investment and using extremely superficial cover. Things like these are exclusively pennystock scams.

You're being bamboozled. Google the name of it. Search it on Twitter and 4chan. Watch any Coffeezilla video.


I'm googling "bags.fm", and everything I can find is about money going to creators. Literally nothing suggesting that you're going to get rich by buying these tokens.

Searching for "bags.fm" on X with keywords like "invest" or "rich" or "moon" also does not seem to return any conversations referring to anyone but the creators getting rich.

I can't find any bags.fm references on 4chan, and searching for gas town instead doesn't seem to bring up anything cryptocurrency related in the archive.

> You're being bamboozled

I don't think so. I suspect the world is so full of crypto scams that when someone does something explicitly non-scammy ("Hey, here's a crypto thing you can use to give me free money!") people still incorrectly view it as scammy because of crypto.

How many memecoin "investors" do you think view these as serious investments? I suspect essentially none of them.

How many memecoin "investors" are degenerate gambling addicts who need treatment? Probably most of them.

Taking money from vulnerable gambling addicts is certainly not ideal, but it's far from scammy.


Yegge himself wrote a blog post to his non-crypto audience calling it an investment that he hopes makes its investors filthy rich. He pumped it, then he dumped it, then announced he's walking away from it, after taking his profits and crashing its value.

I don’t know why you’re talking about existing hardcore BAGS addicts when the topic is Yegge promoting a crypto grift to his own general audience as an investment and then running the typical pump and dump scam on them.


It's a scam or a pennystock grift or whatever term you want to use.

https://x.com/Fizzy__01/status/1956006313848397861

100% of these things are somewhere on the scam and fraud spectrum. An unscrupulous person creates a token or a platform for creating tokens with the goal of raising the worthless token's price so they can parasitically make millions from something that holds zero value.

The "fund creators" thing is a common ploy. If they actually wanted to do that, they'd make it so you can only donate with dollars or stablecoins.

Look at the dozens of replies to all of Yegge's posts, now: https://x.com/Steve_Yegge/status/2014530592134910215


Yegge wrote a blog post for his readers where he called it an investment and hoped the investors would get “filthy rich”.

I don't get crypto. I just looked up how a couple of the best-performing stocks did in the past decade, and I'm pretty sure you could outperform BTC with the same amount of risk tolerance.

The swings in BTC's price are absolutely insane, and ETH's even more so (even riskier, without showing higher gains).
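
If you want to sanity-check that kind of comparison yourself, here's a minimal Python sketch of the return-vs-volatility math. The yearly return figures below are made-up placeholders for illustration, not real market data; plug in actual numbers for the stock and for BTC.

    # Compare annualized return and volatility for two assets.
    # All returns below are hypothetical placeholder values.

    def cagr(start_price, end_price, years):
        """Compound annual growth rate over the holding period."""
        return (end_price / start_price) ** (1 / years) - 1

    def volatility(yearly_returns):
        """Naive standard deviation of yearly returns, as a rough risk proxy."""
        mean = sum(yearly_returns) / len(yearly_returns)
        var = sum((r - mean) ** 2 for r in yearly_returns) / len(yearly_returns)
        return var ** 0.5

    # Hypothetical decade: asset A (a strong stock) vs. asset B (a coin).
    a_returns = [0.30, 0.25, -0.05, 0.40, 0.10, 0.35, -0.10, 0.50, 0.20, 0.15]
    b_returns = [1.20, -0.60, 0.90, -0.40, 2.00, -0.70, 1.50, -0.30, 0.80, -0.20]

    for name, rets in [("A", a_returns), ("B", b_returns)]:
        price = 100.0
        for r in rets:
            price *= 1 + r  # compound each year's return
        print(name, "CAGR:", round(cagr(100.0, price, len(rets)), 3),
              "volatility:", round(volatility(rets), 3))

With real data, the point is simply whether the extra volatility actually bought you extra compounded return over the decade.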


What the? How do you sell crypto based on a description of an orchestration framework?

donations?


People keep giving him the benefit of the doubt. "He's clearly on to something, I just don't know what". I know what. The hustle of the shill. He has long gone from 'let's use a lot of tokens' to seeking a high score. He disgusts me.

What high score?

What does he have to gain? This is Deno: https://deno.com


What is incorrect or bad about his statement?


Your posts here remind me of Trumpists citing random Twitter leftists as Democratic party leaders.


Lol. "random leftists"

The first two come directly from OpenAI, Anthropic, and others.

The last one literally made the rounds even on HN, e.g. Klarna bringing back its support staff after trying to replace them with AI.


The last one is irrelevant. Of course some companies are miscalculating.

OpenAI never claimed they had achieved AGI internally. Sam was very obviously joking, and despite the joke being so obvious he even clarified hours later.

>In a post to the Reddit forum r/singularity, Mr Altman wrote “AGI has been achieved internally”, referring to artificial general intelligence – AI systems that match or exceed human intelligence.

>Mr Altman then edited his original post to add: “Obviously this is just memeing, y’all have no chill, when AGI is achieved it will not be announced with a Reddit comment.”

Dario has not said "we are months away from software jobs being obsolete". He said:

>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

He's maybe off by some months, but not at all a bad prediction.

Arguing with AI skeptics reminds me of debating other very zealous ideologues. It's such a strange thing to me.

Like, just use the stuff. It's right there. It's mostly the people using the stuff vs. the people who refuse to use it because they feel it'll make them ideologically impure, or they used it once two years ago when it was way worse and haven't touched it since.


The insecurity is mind-boggling. So many engineers afraid to touch this stuff for one reason or another.

I pride myself on being an extremely capable engineer who can solve any problem given the right time and resources.

But now, random unskilled people can do in an afternoon what it might have taken me a week or more to do before. Of course, I know their work might be filled with major security issues, or terrible architectural decisions and hidden tech debt that will eventually grind development to a complete halt.

I can be negative and point out these issues, or I can adopt these tools myself, and have the skilled hand required to keep things on rails. Now what I can do in a week cannot be matched by an unskilled engineer in an afternoon, because we have the same velocity multipliers.

I remember being such a purist in my youth that I didn't even want autocomplete or intellisense, because I feared it would affect my recall or stunt my growth. How far we have come. How I code has changed completely in the last year.

I code 8-20 hours a day. I actively work on several projects at once, flipping between contexts to check results, review code, iterate on design and implementation, and hand off new units of work to various agents. It is not a perfect process; I am constantly screaming and pulling my hair out over how stupid and forgetful and stubborn these tools can be sometimes. My output has still dramatically increased, and I have plenty of extra time to ensure the code is secure and of good enough quality.

I've given up on expecting perfection from code I didn't write myself; but what else is new? Any skilled individual who has managed engineers before knows you have to get over this quickly and accept that code from other engineers will not match your standards 100%.

Your role is to develop and enforce guidelines and processes that ensure any code hitting production has been thoroughly reviewed and made secure and performant. There might be some stupid inline metacomments from the LLM that slip through, but if your processes are tight enough, you can produce much more code with correct interfaces, even if the insides aren't perfect. Even then, targeted refactors are more painless than ever.
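
As a minimal sketch of one such process gate (assuming a CI step that receives `git diff` output on stdin; the phrase list and the wiring are illustrative assumptions, not anyone's actual setup), something like this catches the most obvious leftover LLM metacomments before they hit production:

    # Scan added lines in a unified diff for comments that smell like
    # leftover LLM boilerplate. Patterns are examples only; tune to taste.
    import re
    import sys

    SUSPECT_PATTERNS = [
        re.compile(r"as an ai (language )?model", re.IGNORECASE),
        re.compile(r"#\s*(TODO: implement|placeholder logic)", re.IGNORECASE),
        re.compile(r"//\s*rest of the code (remains|stays) the same", re.IGNORECASE),
    ]

    def check_diff(diff_text):
        """Return (diff_line_number, line) pairs for added lines that look suspect."""
        hits = []
        for number, line in enumerate(diff_text.splitlines(), start=1):
            if not line.startswith("+") or line.startswith("+++"):
                continue  # only inspect added lines, skip file headers
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                hits.append((number, line))
        return hits

    if __name__ == "__main__":
        findings = check_diff(sys.stdin.read())
        for number, line in findings:
            print(f"diff line {number}: {line.strip()}")
        sys.exit(1 if findings else 0)

Something like `git diff origin/main...HEAD | python check_metacomments.py` (the script name is hypothetical) would fail the build on any hit; the point is that the gate is cheap and mechanical, so humans can spend review time on architecture and security instead.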

Engineers who only know how to code, and only at a relatively mediocre level (which I imagine describes the majority of engineers now in the field, many of whom got into it for the money), are probably feeling the heat and worrying that they won't be employable. I do not share that fear, provided anyone at all is employable.

When running a business, you'll still need to split the workload, especially as keeping pace with competition becomes an increasingly brutal exercise. The money is still in the industry, and people with money will still find ways to use it to develop an edge.


The AI bubble will pop any month now.

See? I can do this too.


Steve Yegge is not an idiot or a bad programmer. Possibly just hypomanic at most. And a good, entertaining writer. https://en.wikipedia.org/wiki/Steve_Yegge

Gas Town is ridiculous and I had to uninstall Beads after seeing it only confuse my agents, but he's not completely insane or a moron. There may be some kernels of good ideas inside Gas Town that could be extracted into a better system.


> Steve Yegge is not an idiot or a bad programmer.

I don't think he's an idiot; in my opinion there are almost no actual idiots here on HN, and actual idiots don't write articles or build systems the way Steve Yegge does. I'm only commenting about giving more tools to idiots. Even tools made by geniuses will give you idiotic results when used by actual idiots, but a lot of smart people want to lower the barriers to entry so that idiots can use more tools. And there are a lot of idiots who were inactive only because they didn't have the tools. A famous quote from the Polish essayist/futurist Stanisław Lem: "I didn't know there are so many idiots in this world until I got internet".


Even if I looked past the overwrought, self-indulgent Mad Max LARP (and the poor judgment evidenced by prioritizing world-building minutiae while the basic architecture is imploding), the cost of finding those kernels in a monstrosity of this size negates any ROI. 189k lines in four weeks will inevitably surface interesting pattern combinations; that's not merit, that's sample size. You might as well search the Library of Babel; at least the patterns are guaranteed to exist there.

The other problem with that reasoning is that whatever patterns ARE interesting are more likely to be new to AI-assisted coding generally – meaning a cleaner system built for the same use case will surface them without the archaeological dig, just by virtue of its builder having the skill to design it (and crucially, being more interested in designing it than in creating AI drawings of polecats in steampunk-adjacent garb).

I'm also a bit curious at what point you start considering someone an idiot when they keep making objectively idiotic moves. The whimsical Disneyfied presentation, the "please don't download this" false modesty while keeping the repo public, and the inexplicable code growth all come from the same place. They're not separate quirks: they're the same inability to edit, the same need for immediate audience validation, the same substitution of volume and narrative for actual engineering discipline. Someone who thinks "Polecats" and "Guzzoline" are good names for production abstractions is not suddenly going to develop the editorial rigor to scrap a codebase and rebuild.

Which is why it's worth remembering that Yegge's one successful shipped project was Grok, an internal tool used by Google engineers. Yegge seems to have bought his own hype, missing how much of that project's success was likely subsidized by a user base of people skilled enough to route around its limitations.

These days he seems to be building for developers in general, but critically might be missing that actual developers immediately clock the project's ineptitude + Yegge's immature, narcissistic prioritization and peace the fuck out. The end result of this is filtering for the self-described vibe-coder types, people already Dunning-Krugered enough to believe you can prompt your way into a complete system without knowing how to reason about that system in order to guide the AI.

Which, fittingly, is how you end up with users who can't even follow "please don't download this yet".


From my heavy experience using every frontier model for a year now, LLMs are actually probably much, much better at nuanced data migration problems specific to your stack than at a frontend user settings page. (Though still pretty good at both. And the user settings page will work, sure.)


I feel no strong need to convince others. I've been seeing major productivity boosts for myself and others since Sonnet 3.5. Maybe it's less good for certain kinds of projects and use cases, or maybe those people aren't using it well; I dunno. I do think a lot of these people probably will be left behind if they don't adopt it within the next 3 years, but that's not really my problem.


What's there to be left behind on? That's like arguing people who stick to driving cars with manual transmissions are going to get left behind when buses "inevitably get good."

The whole point of the AI coding thing is that it lets inexperienced people create software. What skill moat are you building that a skilled software developer won't be able to pick up in 20 minutes?


(Don't take this as an attack on you personally, just on the sentiment you're expressing in your comment.)

The attitude you present here has become my litmus test for who has actually given the agents a thorough shake rather than just a cursory glance. These agents are tools, not magic (even though they appear to be magic when they are really humming). They require operator skill to corral them. They need tweaking and iteration, often from people already skilled in the kinds of software they are trying to write. It's only then that you get the true potential, and only then that you realize just how much more productive you can be _because of them_, not _in spite of them_. The tools are imperfect, and there are a lot of rough edges that a skilled operator can sand down into huge boons rather than getting cut and saying "the tools suck".

It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results, rather than just slapping in any random phrase and hoping the top 3 results will magically answer you. And sometimes you need to combine information from multiple results to get what you're going for.


> It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results, rather than just slapping in any random phrase and hoping the top 3 results will magically answer you. And sometimes you need to combine information from multiple results to get what you're going for.

Lmao, and just as with Google, they'll eventually optimize it for ad delivery and it will become shit.


Your analogy is the wrong way around :)

Everyone is driving automatics now; LLMs are the manual transmission in a classic car with "character".

Yes, anyone can step into one, start it, and maybe get there, but the transmission and engine will make strange noises all the way, and most people just stick to first gear because second gear needs this weird wiggle and a trick with the gas pedal to engage properly.

Using (agentic) LLMs as coding assistants is a skill that (at the moment) can't really be taught, since it's not deterministic and relies a lot on feel and on getting the hang of different base models (Gemini, Sonnet/Opus, GPT, GLM, etc.). The only way to really learn is by doing.

Yes, anyone can start up Google Antigravity or whatever, say "build me a todo app", and get one. That's just first gear.


That is definitely not the point of it.


> "...these people probably will be left behind if they don't adopt it..."

And there it is, the insecure evangelism.


I'm not sure you understand what insecure means. Do you think it means people aren't able to have an opinion about other people's skills, values, attitudes, and behaviors, and what those might ultimately result in?


The linked blog post explains what the post's author means by "insecure evangelism" and the parent poster followed the pattern described. Perhaps you should direct your comment to the author.


The poll did not ask "Is it OK to be white?"; it asked "Do you agree or disagree with this statement: 'It's OK to be white.'"


Not only that: Adams deceptively lumped the answer "I don't know" in with "I disagree", and it STILL didn't add up to 50%. And it was an ideologically motivated push poll from Rasmussen Reports, a slanted right-wing polling organization. A fair poll would never use a White Supremacist trolling slogan as a trick question with no explanation. The question doesn't even make any sense, and was asked with no context or definition of what "ok" means, so "I don't know" is the obvious correct answer.

https://en.wikipedia.org/wiki/Push_poll

"It's ok to be white" is a White Supremacist slogan specifically designed to troll and cause division and hatred, and Adams gleefully took that and ran with it, and lied and exaggerated to make his false racist point, just like negzero7 continue to do. What both Scott Adams and negzero7 did was PRECISELY what the White Supremacists who coined that slogan had hoped for.

https://en.wikipedia.org/wiki/It%27s_okay_to_be_white

>"It's okay to be white" (IOTBW) is an alt-right slogan which originated as part of an organized trolling campaign on the website 4chan's discussion board /pol/ in 2017.[1][2][3] A /pol/ user described it as a proof of concept that an otherwise innocuous message could be used maliciously to spark media backlash.[4][5] Posters and stickers stating "It's okay to be white" were placed in streets in the United States as well as on campuses in the United States, Canada, Australia,[6] and the United Kingdom.[7][5]

>The slogan has been supported by white supremacists and neo-Nazis.[2][1][8]

>In a February 2023 poll conducted by Rasmussen Reports, a polling firm often referred to by conservative media, 72% of 1,000 respondents agreed with the statement "It's okay to be White". Among the 130 black respondents, 53% agreed, while 26% disagreed, and 21% were unsure. Slate magazine suggested that some negative respondents may have been familiar with the term's links with white supremacy.[41] The Dilbert comic strip was dropped by many newspapers after author Scott Adams, reacting on his podcast to the outcome of this poll, characterised black people as a "hate group" for not agreeing with the statement and encouraged white people to "get the hell away from" them.[42]

And now negzero7 is purposefully trolling and spreading the same false divisive misinformation himself, so his racist White Supremacist motives are extremely clear and obvious.

