Their headcount was around 10,000. Before AI, do you think each additional employee after the 10,000th would have increased profit?
- if yes, then why didn't they hire more employees?
- if no, then isn't it obvious that they don't need more than 6,000 employees who are approximately 20% more productive? if the 6,001st employee can add profit, then surely the 10,001st could've also added profit, right?
> Thanks to LLMs, each worker can do twice the work they could before. Naturally we are firing half the company because ... business is good and ... too much productivity is bad
this is an incorrect take. The company needs a certain amount of productivity at each point.
If not, how would you explain that they had only 10,000 employees and not 20,000? They could still remain profitable.
LLMs increased productivity and each person could do approximately 20% more work, so it follows that they need fewer people. If not, they should have had 12,000 to begin with.
I agree if they weren’t simultaneously claiming to be a successful growing company.
> they should have had 12,000 to begin with
This is how successful growing companies work. They hire as many people as they can afford. Those people bring in more money to hire more people, and repeat.
A successful growing company has more opportunity than resources.
Reducing resources while also claiming to have un-captured opportunity makes no sense
Everything I said was based off of jack's post, as I quoted it. If you take issue with the non-specificity or think he was being less than honest - take it up with jack.
exactly - end consumers like you and me will end up having to pay for their jobs indirectly.
i personally want products i purchase to be cheaper and i don't want to be paying for products that are costly simply because they are hiring people for "human wellbeing".
i would rather people work in productive places than just exist in a company for some reason.
more money for doing nothing? i don't want to live in a world like that. what part of this is not clear?
two options
- the 4000 employees stay employed as a block - that's around $600,000,000 that goes into literally no value, and this price is borne by us consumers
- or the 4000 employees get fired and work in different companies that actually require them so that we as consumers can actually buy more products
by choosing option 1, you not only accept that as consumers we pay more for the product, but also miss out on the other valuable work the 4000 employees could do. no good economy runs this way.
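Back-of-envelope check of the $600,000,000 figure above. The per-employee cost is only implied by the quoted numbers, not stated in the thread; treat it as an assumption (roughly a fully-loaded annual cost per head):

```python
# Cost of keeping the 4,000 employees on, per the figures quoted above.
# The per-head number is implied, not stated; it is an assumption here.
employees = 4_000
total_cost = 600_000_000  # dollars, as stated above

cost_per_employee = total_cost / employees
print(f"${cost_per_employee:,.0f} per employee")  # $150,000 per employee
```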
I actually don’t think you do. At the least, the question then becomes “is wellbeing most well served by paying a small number of software engineers a lot of money”. That is prima facie absurd.
the motive is probably more depressing. a normal human who just wants human interaction. people interacting with something "you" wrote just feels nice and people like that stuff.
I don't think it's true that Ed doesn't comment about the actual tech. Here are some things he has said before - please tell me if these still hold up in spirit:
> You cannot "fix" hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.
ChatGPT is fairly reliable.
>Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.
This is untrue in spirit.
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
Imagine if they’d done something else.
Imagine if they’d done anything else.
Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.
Imagine, because right now that’s the closest you’re going to fucking get.
This is what he said in 2024. He really thought ChatGPT was not the future.
There are so many examples, and it's clear that he's not arguing in good faith and has consistently gotten the spirit wrong.
> With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.
I'm honestly not sure how this issue could be solved. Fundamentally, LLMs are next (or N-forward) token predictors. They don't have any way (in and of themselves) to ground their token generations, and given that token n depends on all of tokens 1...n-1, small discrepancies can easily spiral out of control.
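That spiraling intuition can be made concrete with a toy model: if each token is independently correct with probability p, an n-token output is error-free with probability p^n, which decays quickly even for high p. The independence assumption is a deliberate simplification for illustration, not a claim about how real models behave:

```python
# Toy model of compounding error in autoregressive generation:
# assume (simplistically) each token is independently correct with
# probability p_token; an n-token output is then error-free with p_token**n.
def prob_all_correct(p_token: float, n_tokens: int) -> float:
    """Probability an n-token output has zero errors at per-token accuracy p_token."""
    return p_token ** n_tokens

# Even 99.9% per-token accuracy leaves long outputs mostly error-prone:
for n in (10, 100, 1000):
    print(n, round(prob_all_correct(0.999, n), 3))
```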
Solving it doesn't mean eliminating it completely. I think GPT has solved it to a sufficient extent that it is reliable. You can't easily get it to hallucinate.
It depends on how much context is in the training data. I find that they make stuff up more in places where there isn't enough context (so more often in internal $work stuff).
at the previous productivity it was 10,000 employees. not 10,001 nor 9,999.
at the current productivity it is 6,000.
why are you so sure that the 6,001st employee can increase profits, but not the 10,001st employee before AI?