Hacker News | new | past | comments | ask | show | jobs | submit | simianwords's comments

at every productivity point there's an optimal number of employees needed.

at the previous productivity it was 10,000 employees. not 10,001 nor 9,999.

at the current productivity it is 6,000.

why are you so sure that the 6,001st employee can increase profits now, but the 10,001st employee couldn't before AI?


Their headcount was around 10,000. Before AI, do you think each additional employee after the 10,000th would have increased profit?

- if yes, then why didn't they hire more employees?

- if no, then isn't it obvious that they don't need more than 6,000 employees who are each approximately 20% more productive? if the 6,001st employee can add profit, then surely the 10,001st could have too, right?


> Thanks to LLMs, each worker can do twice the work they could before. Naturally we are firing half the company because ... business is good and ... too much productivity is bad

this is an incorrect take. The company needs a certain amount of productivity at each point.

If not, how would you explain that they had only 10,000 employees and not 20,000? They could still have remained profitable.

LLMs increased productivity, and each person could do approximately 20% more work, so it follows that they need fewer people. If not, they should have had 12,000 to begin with.


I'd agree, if they weren't simultaneously claiming to be a successful, growing company.

> they should have had 12,000 to begin with

This is how successful growing companies work. They hire as many people as they can afford. Those people bring in more money to hire more people, and repeat.

A successful growing company has more opportunity than resources.

Reducing resources while also claiming to have uncaptured opportunity makes no sense.


> If not, how would you explain that they had only 10,000 employees and not 20,000?

Simple: 1,000+ salaries > 10,000 × $100/mo Claude seats.


"they should not have had 12,000 to begin with"

Nailed it


> The company needs a certain amount of productivity at each point.

Um, no?


it does not work like that, except in a Berkeley MBA mind

how do you lay off 40% quietly?

i don't think this is true.

assuming a $150,000 average salary, that's around $600,000,000 in total, so cutting it increases yearly profit by about 30%.
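a quick sanity check of that arithmetic, using the thread's assumed numbers (4,000 laid-off employees at a $150,000 average salary; both figures are assumptions from this discussion, not Block's reported ones):

```python
# Back-of-the-envelope payroll savings from the layoff,
# using the thread's assumed figures (not actual company data).
employees_cut = 4_000
avg_salary = 150_000  # assumed average fully-loaded salary, USD/year

annual_savings = employees_cut * avg_salary
print(f"${annual_savings:,}/year")  # $600,000,000/year
```

so the right order of magnitude is $600M/year, not $600K.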


While destroying morale, and increasing the difficulty of successfully recruiting later

I directly quoted jack - take it up with him.

> The company is profitable, and Jack could have kept 4000 people employed with no difference in outcome

did he suggest no difference in outcome in terms of profits?


You can check for yourself, if you RTFA.

Everything I said was based on jack's post, as I quoted it. If you take issue with the non-specificity or think he was being less than honest, take it up with jack.


i don't think you understood what i'm saying or what he's saying. you don't do a layoff without accepting a change in the outcome.

they are now estimating 18% instead of 17%

fundamentally you see jobs as more important than the end product. this is a tension i keep finding in many minds.

I see fundamentally human wellbeing as more important. Jobs are just the structure society has built as a gateway for this.

exactly - end consumers like you and me will end up having to pay for their jobs indirectly.

i personally want products i purchase to be cheaper and i don't want to be paying for products that are costly simply because they are hiring people for "human wellbeing".

i would rather people work in productive places than just exist in a company for some reason.


Alternatively, instead of things being cheaper, you could receive that amount more money.

who? the employees? for doing what? i don't want to live in a world where people are getting paid when they don't add any value

I too spend my life ensuring that my only purpose in life is creating shareholder value

you are literally complaining about the idea of having more money

more money for doing nothing? i don't want to live in a world like that. what part of this is not clear?

two options

- the 4000 employees can stay employed at block - that's around $600,000,000 that goes into literally no value, and this is a price borne by us consumers

- or the 4000 employees get fired and work in different companies that actually require them so that we as consumers can actually buy more products

by choosing option 1, you not only accept that as consumers we pay more for the product, but also miss out on other valuable work the 4000 employees can do. no good economy runs this way.


Then I suggest fixing society or finding a better one

I actually don’t think you do. At the least, the question then becomes “is wellbeing most well served by paying a small number of software engineers a lot of money”. That is prima facie absurd.

AI comment

Obviously so, yeah. Astroturfing? Or is there any other reason why this is becoming so common on HN?

I'm not a native speaker. Besides the dash, what are the signs that it's AI?

I can’t point out the exact signs because the message got removed, but a common sign is labeled paragraphs:

“My take: so and so.”

“The key idea: so and so.”

There are also some common sentence structures, like the format “it’s not A, it’s B”. For example, “this is not important; it is essential”.

Some words also tend to appear very frequently, like the verb pretend: “this is John no longer pretending he’s dumb”.

Any of those examples could appear in legitimate human text, but when you see many of those signs in a short text it’s very obvious.


Karma farming to frontpage more AI news and startups?

Massaging sentiment

Claw and people who haven't realized ns;nt

This article is 100% AI generated. I confirmed with pangram.

the motive is probably more depressing. a normal human who just wants human interaction. people interacting with something "you" wrote just feels nice and people like that stuff.

I don't think it's true that Ed doesn't comment about the actual tech. Here are some things he has said before; please tell me if these still hold in spirit?

> You cannot "fix" hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.

ChatGPT is fairly reliable.

>Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.

This is untrue in spirit.

> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.

Imagine if they’d done something else.

Imagine if they’d done anything else.

Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.

Imagine, because right now that’s the closest you’re going to fucking get.

This is what he said in 2024. He really thought ChatGPT was not the future.

There are so many examples, and it's clear that he's not arguing in good faith and has consistently gotten the spirit wrong.


This guy sounds like an uninformed jackass.

Look at Gemini 3.1 Pro on the AA-Omniscience Index, which measures hallucinations. It scored 30; the previous best was 11.

https://artificialanalysis.ai/evaluations/omniscience

With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.


> With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.

I'm honestly not sure how this issue could be solved. Fundamentally, LLMs are next-token (or N-token-ahead) predictors. They don't have any way (in and of themselves) to ground their token generations, and given that token n depends on all of tokens 1...n-1, small discrepancies can easily spiral out of control.
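to illustrate the spiraling point with a toy model (this assumes each token is wrong independently with some small probability p, which real models don't strictly satisfy, but it shows how errors compound over long generations):

```python
# Toy model of compounding generation error: if each token is wrong
# independently with probability p, the chance that an n-token output
# contains at least one error grows quickly with n.
# (Independence is an assumption for illustration only.)
def p_any_error(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a 0.1% per-token error rate gives ~63% chance of at least
# one error somewhere in a 1,000-token answer.
print(round(p_any_error(0.001, 1000), 3))
```

so even very low per-token error rates don't translate into reliable long outputs under this model.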


Solving it doesn't mean eliminating it completely. I think GPT has solved it to a sufficient extent that it is reliable. You can't easily get it to hallucinate.

It depends on how much context is in the training data. I find that they make stuff up more in places where there isn't enough context (so more often in internal $work stuff).
