
> Full disclosure: Anthropic gave me a few months of Claude Max for free. They reached out one day and told me they were giving it away to some open source maintainers.

Related: lately I've been getting tons of Anthropic Instagram ads; they must have made up nearly a quarter of the sponsored content I've seen over the last month or so. Various people vibe coding random apps and whatnot using different incarnations of Claude, or just direct adverts to "Install Claude Code." I really have no idea why I've been targeted so hard, on Instagram of all places. Their marketing team must be working overtime.


I think it might be that they've hit product-market fit.

Developers find Claude Code extremely useful (once they figure out how to use it). Many developers subscribe to their $200/month plan. Assuming that's profitable (and I expect it is, since even for that much money it cuts off at a certain point to avoid over-use), Anthropic would be wise to spend a lot of money on marketing to try to grow its paying subscriber base.


I just don’t see how they can build a moat.

I don’t feel like paying for a Max-level subscription, but I am trying out MCP servers across OpenAI, Anthropic, etc., so I pay for the access to test them.

When my X-hour token allotment runs out on one model I jump to the next, closing Codex and opening Claude Code or whatever, and alter my prompting a tiny bit to fit the current model.

Being this fungible should by definition mean a race to zero margins, with roughly zero profit being made in the long run.

I suppose they can hope to make bank over the next 6-12 months, but that doesn’t create a long-term sustainable company.

I guess they can try building up context to lock me in by increasing the cost of switching - but today I break that myself every 3-4 prompts by clearing the context, because I know the output will get worse if I keep going.


The money right now is in company subscriptions. If I went to my employer and said I can save an hour of work per day with a Claude Code subscription (and they have any sense at all) they should laugh at that $200/month price tag and pay up in an instant.

The challenge is definitely in the competition though. GPT-5-Codex offered _very_ real competition for Claude Sonnet 4 / Opus 4 / Opus 4.1 - for a few weeks OpenAI were getting some of those subscribers back until Sonnet 4.5 landed. I expect that will happen frequently.


> Many developers subscribe to their $200/month plan.

There's no way "many developers" are paying $2,400 annually for the benefit of their employers.

There's no way companies are paying when they won't even fork out $700 a year for IntelliJ, instead pushing us all onto VSCode.


https://www.anthropic.com/news/anthropic-raises-series-f-at-...

> At the beginning of 2025, less than two years after launch, Anthropic’s run-rate revenue had grown to approximately $1 billion. By August 2025, just eight months later, our run-rate revenue reached over $5 billion.

Claude Code launched in February 2025. Anthropic's annual run-rate revenue grew from $1bn to $5bn by August.

They haven't published a breakdown of this but I suspect a significant portion of that revenue growth came from their $200/month plan.

It would help explain why seemingly every other LLM company has pivoted to focus on "LLMs for code".


My company is paying for Claude Max for me and a dozen other developers. The others are using the API. If their API usage cost hits a level where it's cheaper to move them to Max, they're moved to Max.

There's no hard mandate to use Claude Code, but the value for us is clear to exec management and they are willing to foot the bill.


What makes it better than VSCode Co-pilot with Claude 4.5? I barely program these days since I switched to PM but I recently started using that and it seems pretty effective… why should I use a fork instead?


There’s really no functional difference. The VSC agent mode can do everything you want an agent to do, and you can use Claude if you like. If you want to use the CLI instead, you can use Claude Code (or the GitHub one, or Codex, or Aider, or…)

I suspect that a lot of the “try using Claude code” feedback is just another version of “you’re holding it wrong” by people who have never tried VSC (parent is not in this group however). If you’re bought into a particular model system, of course, it might make more sense to use their own tool.

Edit: I will say that if you’re the YOLO type who wants your bots to be working a bunch of different forks in parallel, VSC isn’t so great for that.


I think a lot of that feedback is simply an element of how fast the space is moving, and people forming their impressions at different stages of the race. VSCode Copilot today is a wholly different experience than when it first launched as an advanced auto-complete.


I agree. People either have never tried it, or tried it a long time ago when it was something else.


No, there’s a pretty noticeable difference between different tools even when they use the same model and interaction pattern. For instance I’ve used both GitHub Copilot and Cursor interactive agents (which are basically the same UX) aplenty in the past couple of months for comparison, and GH Copilot is almost always dumber than Cursor, sometimes getting stuck on the stupidest issues. I assume context construction is likely the biggest culprit; Cursor charges by tokens while GH Copilot charges by request, so GHC attempts to pass as little as possible (see all the greps) and then fails a lot more. Plus its patching algorithm has always been shit (I’ve used GHC since it came out as a better autocomplete).


Meh. The context stuff is changing by the day, so whatever you're saying now will be out of date by next week. Regardless, you're basically saying that GHC is trying to optimize for cost, which is true for any provider.

Even if there's some slight immediate performance advantage for Cursor over GHC, the ability to trivially switch models more than makes up for it, IMO.


The question was whether Claude Code's better than GHC. "They may release a new version that bridges the gap any moment now" is a completely useless answer to that. And your argument is "people either have never tried it, or tried it a long time ago when it was something else", and I told you I'm comparing it right now, and have done the same a year ago, and many points in between, and GHC is inferior at every single point, and it's not slight. Cursor etc. wouldn’t have been this big if GHC was only slightly behind when it has such a huge early mover advantage and enormous backing.


I've used both, and you're exaggerating. Whatever difference in performance there is between providers changes constantly, and like I said, it's more than offset for me by the practical advantage of being able to switch models regularly.


Claude Code is not a VSCode fork, it's a terminal CLI. It's a rather different interaction paradigm compared to your classical IDE (that said, you can absolutely run Claude Code inside a terminal inside VSCode).


Ah, I think I’m getting it confused with Cursor. So Claude Code is a terminal CLI for orchestrating a coding agent via prompts? That’s different than the initial releases of VSC copilot, but now VSC has “agent” mode that sounds a lot like this. It basically reduces the IDE to a diff viewer.


There is now also copilot-cli, which is a clone of Claude Code and runs with Sonnet 4.5 by default. I haven't spent much time with it on really complex stuff yet, but it's a nice option to have available if you have a Copilot Business/Enterprise plan at work.

https://github.com/github/copilot-cli


VSCode Copilot relies on a lot of IDE-isms for prompt management, which I find cumbersome. The CLI agents generally just ingest markdown files in various directory structures, which is less friction for me. Also, they are more similar to one another, ish, whereas VSCode mostly stands alone (except that it supports agents.md now).

It also lacks a lot of the “features” of CC or Codex cli, like hooks, subagents, skills, or whichever flavor of the month you are getting value out of (I am finding skills really useful).

It also limits models to 128k context - even Sonnet, which under Claude has (IIRC) a million tokens. That can become a bottleneck if you aren’t careful.

We are stuck with VSCode at $job, and so are making it work, but I really fly on personal projects at home using the “Swiss army knife”.

There are of course good reasons for some to prefer an IDE as well; it has strengths, like much more permissive limits and predictable cost.


Copilot, as a harness, is generic enough to work with every model. Claude Sonnet, Haiku, and Opus, on the other hand, are trained with Claude Code specifically in mind.

Also, as a heavy user of both, I find there are small paper cuts that seriously add up with Copilot. Things are missing, like subagents, and the options and requests for feedback that CC can give (interactive picker style instead of prompt-based). Most annoyingly, commands run in a new integrated VSCode terminal instance and immediately, mistakenly "finish" even though execution has just begun.

It's just a better harness than Copilot. You should give it a shot for a while and see how you like it! I'm not saying it's the best for everybody. At the end of the day these issues turn into something like the old vi/emacs wars.

Not sponsored, just a heavy user of both. Claude code is not allowed at work, so we use copilot. I purchased cc for my side projects and pay for the $125/m plan for now.


I believe you that Claude Code is a better harness, but I'm skeptical it's worth learning because I'm certain that VSCode (Microsoft) will catch up to it eventually. There's nothing differentiated about it, and VSC has already closed the gap. As much as I dislike encouraging BigTech hegemony, it seems obvious which horse to bet will win this race...


Agents, skills etc. Stuff that's specific to the Claude CLI tooling and not the model.

Sonnet 4.5 as a raw model is good, but what makes it great is the tool that calls it.

Think of it like this: Sonnet 4.5 is the engine, but the whole car around it matters a LOT.

Copilot is kinda crap as an LLM tool: the tool-calling permissions are clunky, and it doesn't support subagents or skills or anything fancy, really. The best thing about it is that it can see the "Problems" tab in VSCode, populated by the various add-ons and linters, so you can tell an agent "fix the active problems" and it'll get to work.


I find Copilot simply worse at "understanding" codebases than Claude Code.


> since even for that much money it cuts off at a certain point to avoid over-use

if anything this just confirms that the unit economics are still bad


Unfortunately that might also be due to how Instagram shows ads, and not necessarily Anthropic's marketing push. As soon as you click on or even linger on a particular ad, Instagram notices and doubles down on sending you more ads from the same provider, as well as related ads. My experience is that I receive 2-3 groups of ad types, which slowly change over the months as their effectiveness wanes.

The fact that you are receiving multiple kinds of ads from Anthropic does signify more of a marketing push, though.


Look at the node.js APIs: readFile, readFileSync, writeFile, writeFileSync ... and on and on. If that's not a meme then I don't know what is.


And the alternative without async/await is what? Blocking the event loop or the callback pyramid.

Node is one place where async/await has zero counterarguments and every alternative is strictly worse.
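
To make the contrast concrete, here is a minimal sketch of the three styles (the file names are made up for illustration):

    const fs = require('fs');

    // Blocking: simple, but stalls the entire event loop while the disk is busy.
    const a = fs.readFileSync('a.json', 'utf8');

    // Callback style: non-blocking, but sequential work nests into a pyramid.
    fs.readFile('a.json', 'utf8', (err, first) => {
      if (err) throw err;
      fs.readFile('b.json', 'utf8', (err2, second) => {
        if (err2) throw err2;
        console.log(first.length + second.length);
      });
    });

    // async/await over fs.promises: non-blocking, and reads top to bottom.
    (async () => {
      const first = await fs.promises.readFile('a.json', 'utf8');
      const second = await fs.promises.readFile('b.json', 'utf8');
      console.log(first.length + second.length);
    })();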


The problem with Node is that the async decision is in the hands of the leaf node, and it bubbles up to the parent where my code sits. Async/await is nice and a goal in most modern Node, but there are codebases (old and new) where async/await is just not an option, for many reasons.

Node dictates that when I'm faced with an async function I must either go async myself so I can await it, or go down callback rabbit holes with .then(). If the function author is nice, they will give me both async and sync versions: readFile() and readFileSync(). But that sucks.

The alternative would be that 1) the decision to go async is mine, and 2) the language supports that decision with syntax/semantics.

I.e. if I call the one and only fs.readFile() and want to block, I would then do

       sync fs.readFile()
Node would take care of performing a nice synchronous call in a way that plays well with its event-loop logic, with no callback pyramid. End of story. And not via some JS implementation such as deasync [1], but in core Node.

1. https://www.npmjs.com/package/deasync
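
For what it's worth, the closest you get today is picking the shape at the call site by wrapping a callback API yourself. This is not the hypothetical sync keyword above (core Node has no supported way to block on a promise, deasync aside); it is only a sketch of the current options, with a made-up file name:

    const util = require('util');
    const fs = require('fs');

    // Wrap the callback-based readFile into a promise-returning version,
    // so the caller decides whether to await it or chain .then() on it.
    const readFileP = util.promisify(fs.readFile);

    (async () => {
      const data = await readFileP('config.json', 'utf8');
      console.log(data.length);
    })();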


> And the alternative without async/await is what? Blocking the event loop or the callback pyramid.

No, just callbacks and event handlers (and an interface like select/poll/epoll/kqueue for the OS primitives on which you need to wait). People were writing threadless non-blocking code back in the 80's, and while no one loved the paradigm it was IMHO less bad than the mess we've created trying to avoid it.

One of the problems I'm trying to point out is that we're so far down the rabbit hole in this madness that we've forgotten the problems we're actually trying to solve. And in particular we've forgotten that they weren't that hard to begin with.
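
For the curious, here is a minimal Node sketch of that event-handler style (the port number is arbitrary): no async/await and no nested callbacks, just handlers registered on events, much like a select()-driven loop.

    const net = require('net');

    // One handler per event; the runtime's event loop does the waiting.
    const server = net.createServer((socket) => {
      socket.on('data', (chunk) => socket.write(chunk)); // echo back
      socket.on('error', (err) => console.error(err.message));
    });

    server.listen(8124, () => console.log('echo server listening on 8124'));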


They could have added threads to Node as well? Granted, it would have been a lot of difficult work.


Losing threads and moving to the async I/O model was the motivation behind Node in the first place.

https://nodejs.org/en/about


If you use async I/O you can just use the Chrome JavaScript runtime as-is. I would claim it was the only low-effort model available to them, and therefore not the motivation.

The motivation for Node was that users wanted to use JavaScript on the server.


> If you use async I/O you can just use the Chrome JavaScript runtime as-is.

What do you mean? A JS runtime can't do anything useful on its own: it can't read files, it can't load dependencies (it doesn't know anything about "node_modules"), and it can't open sockets or talk to the world in any other way - that's what Node.js provides.

> I would claim it was the only low-effort model available to them and therefore not motivation.

It was a headline feature when it released.

https://web.archive.org/web/20100901081015/https://nodejs.or...


Obviously you can easily add modules that call into C/C++ functionality from a scripting-language runtime (and the interface to do that is already available for the browser implementation).

In the above link, Node could be described as a Chrome V8 distribution with modules that enable building a web server.

Adding threading to a non-threaded scripting runtime is another ball game.

The point is that Node was forced into this model by V8 limitations and then sold it as an advantage. However, it is only one way to solve the problem, with its own trade-offs, and you have to look at your specific use case to see whether it is really the best solution.


> Obviously you can add modules calling to C/C++ functionality to a scripting language runtime easily

Yes, obviously, that's what NodeJS does. But you can't "just use the V8 runtime as-is if you're doing async IO"; it doesn't have those facilities at all.

Async IO wasn't just "sold as an advantage", it is an advantage. Websockets were gaining popularity around that time and async IO is a natural fit for that.

You would have to change the language and boil the ocean to make the runtime support multiple threads (properly).

But why? Just to end up with the inferior thread-per-request runtime (which, by the way, still needs to support async because it's part of the language), one that requires developers to write JS that's incompatible with browser JS, which would've eliminated most of the synergy between the two?

I really don't understand what you're going for here. I don't see a single advantage here.


I think green threads (Java Virtual Threads, Go to an extent) are strictly superior to async/await.

If you don't have many threads, OS threads are okay as well. It is all about memory and scheduling overhead.

But that is just my opinion. You are welcome to have a different opinion.


No I don't think your opinion is "wrong" or anything, it's just that this is a language-level limitation and not a valid criticism of NodeJS.


You mean like with web workers or something?


With a shared interpreter/process state, like Python, Java, C, C++, ...

Node is not a web page, so no reason to limit it to the same patterns.

Then, the next issue would be thread safety. But that could be treated as a separate problem.
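
For context, Node did later ship worker_threads. It is not the shared-interpreter model described above - each worker gets its own isolate - but SharedArrayBuffer plus Atomics does give you shared memory between threads. A rough sketch:

    const { Worker, isMainThread, workerData } = require('worker_threads');

    if (isMainThread) {
      const shared = new SharedArrayBuffer(4);      // one Int32 of shared memory
      const counter = new Int32Array(shared);
      const worker = new Worker(__filename, { workerData: shared });
      worker.on('exit', () => console.log('counter =', counter[0]));
    } else {
      const counter = new Int32Array(workerData);   // same memory as the parent
      Atomics.add(counter, 0, 1);                   // thread-safe increment
    }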


But obviously an investor with $XYZ under management that can make N bets is better equipped to handle that risk than you, an individual with 1 bet.


fair point :)


My decade in the making habit of only reading HN comments is finally paying off. Nowadays when I do randomly skim an article it's almost always slop.


They're worried about paying for their next trip to the dentist. Not working when they're old is not in the picture.


Anecdotally I've experienced something similar.

After I started committing, really committing to consistently working out, a lot of other things fell into place more or less automatically. I stopped drinking, started eating very cleanly (I became ravenously hungry; junk food and sweets aren't appealing anymore), and stopped spending as much time on gaming. I know your broader point is about leaning into discomfort, but specifically leaning into exercise seems to bring extra benefits. Exercise is medicine, as they say.


I think for this to work psychologically, it just needs to be something difficult or uncomfortable that you can do an awful lot of in a way that is sustainable and doesn't actively harm you... all the better if you actually benefit directly from it, like with exercising, but cold showers work just as well, simply because they're uncomfortable and take much less time than working out - I personally do both.


It's been months since I've booted into Windows to play a game. Feels amazing. The only exception I've run into is heavy anticheat titles, like trying to play on Faceit CS2 servers.

I can live without that though. I don't think I'll bother setting up a Windows partition on my next PC.


I think the last time I had Windows installed was right around the release of Proton in 2018. Those initial releases were not great but the trend was very obvious.

There comes a point where you just don't miss it. The only moment it's apparent is that dissociation you feel when someone else just assumes you run Windows. I don't blame them, I am statistically the odd one there, but that is when you have to figure out things your own way.


The fine tuning will continue until we reach AGI.


The fine tuning will continue until we reach the torment nexus, at best


> Discipline is the only thing that matters in schools

I thought it was parenting? This study claims that "parental involvement is a more significant factor in a child’s academic performance than the qualities of the school itself." [0]

I couldn't find better sources on my phone but this is a theme I've heard repeated over and over throughout my life. Parenting makes the difference.

[0] https://news.ncsu.edu/2012/10/wms-parcel-parents/


Agree; there's some salient truth to be found here but I can't quite put my finger on it.

