
So AI companies are profitable when you ignore some of the things they have to spend money on to operate?

Snark aside, inference is still being done at a loss. Anthropic, the most profitable AI vendor, is operating at a roughly -140% margin. xAI is the worst at somewhere around -3,600% margin.
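To make the margin figures concrete: assuming "margin" here means (revenue - cost) / revenue, a -140% margin implies costs of roughly 2.4x revenue. The revenue numbers in the sketch below are made up purely to show the arithmetic, not actual Anthropic or xAI financials.

    # Illustration only: revenue figures are hypothetical.
    # margin = (revenue - cost) / revenue, solved for cost.
    def implied_cost(revenue, margin):
        return revenue * (1 - margin)

    print(implied_cost(100, -1.40))   # -140% margin -> ~240 in cost per 100 of revenue
    print(implied_cost(100, -36.00))  # -3,600% margin -> ~3,700 in cost per 100 of revenue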



If they are not operating inference at a loss and current models remain useful (why would they regress?), they could just stop developing the next model.


At a minimum, they have to incorporate new data every month or the models will fail to know how many Shrek movies there are, and will become increasingly wrong in a world that isn't static.


That sort of thing isn't necessary for all use cases. But if you're relying on the system to encode Wikipedia or the zeitgeist, then sure.


They could, but that’s a recipe for going out of business in the current environment.


Yes, but at the same time it's unlikely that existing models would just disappear. You might not get the next model, but there would be no choice but to keep inference running to pay off creditors.


The interesting companies to look at here are the ones that sell inference against open-weight models trained by other companies: Fireworks, Cloudflare, DeepInfra, Together AI, etc.

They need to cover their serving costs but are not spending money on training models. Are they profitable? Probably not yet, because they're pouring cash into competing with each other on R&D for more efficient serving, but they're a lot closer to profitability than the labs that are spending millions of dollars on training runs.


Where do those numbers come from?


Can you cite your source for inference being at a loss? This disagrees with most of what I've read.



