Hacker News | teej's comments

Depends how many 3090s you have

How many do you need to run inference for 1 user on a model like Opus 4.5?

8x 3090.

Actually better make it 8x 5090. Or 8x RTX PRO 6000.


How is there enough space in this world for all these GPUs

Just try calculating how many RTX 5090 GPUs, by volume, would fit in the rectangular bounding box of a small sedan, and you will understand how.

The 2026 Honda Civic sedan has exterior bounding-box dimensions of 184.8” (L) × 70.9” (W) × 55.7” (H). The volume of that is ~12,000 liters.

An RTX 5090 GPU is 304 mm × 137 mm, with roughly 40 mm of thickness for a typical 2-slot reference/FE model. That makes a bounding box of ~1.67 liters.

Do the math, and you will discover that a single Honda Civic is the volume equivalent of ~7,180 RTX 5090 GPUs. And that’s a small sedan, significantly smaller than the average or median car on US roads.
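As a sketch of the napkin math, using only the approximate bounding-box figures quoted above:

```python
IN_TO_MM = 25.4

# 2026 Honda Civic sedan exterior bounding box: 184.8" x 70.9" x 55.7"
civic_mm = [d * IN_TO_MM for d in (184.8, 70.9, 55.7)]
civic_liters = civic_mm[0] * civic_mm[1] * civic_mm[2] / 1e6  # mm^3 -> liters

# RTX 5090, 2-slot reference/FE-style card: 304 mm x 137 mm x ~40 mm
gpu_liters = 304 * 137 * 40 / 1e6

print(f"Civic: ~{civic_liters:,.0f} L; 5090: ~{gpu_liters:.2f} L")
print(f"GPUs per Civic, by volume: ~{civic_liters / gpu_liters:,.0f}")
```

This is bounding boxes only, so it ignores packing efficiency, cabling, and everything else downthread.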


What about what's around the GPU? Motherboard etc.

I didn’t do the napkin math on it earlier because I don’t think it really matters for the point I was making.

I don’t care about looking up real numbers, so I will just overestimate heavily. Let’s say that for a large enough number of GPUs, the overhead of all the surrounding equipment would be around 20% (amortized).

So you can just take the number of GPUs I calculated in my previous comment, multiply by 0.8, and you get your answer.


This is not 20%, it's 100%+.

Now factor in power and cooling...

Don’t forget to lease out idle time to your neighbors for credits per 1M tokens…

Milk crates and fans, baby. Party like it’s 2012.

48x 3090s, actually.

None, if you have time to wait, and a bit of memory on the computer.

Is that the claim the OP is making?

The food pyramid was published by the Department of Agriculture; it’s always been propaganda.

Thanks to the dedicated work of Edward Bernays... nephew of Sigmund Freud ... and the Creel Committee

https://en.wikipedia.org/wiki/Committee_on_Public_Informatio...

When Lucky Strike needed more women to smoke cigarettes in the late 1920s, it turned to Bernays.


It doesn't sound like your firm does any diligence that would actually prevent you from buying a vendor that has security flaws.


Your coworkers were probably writing subtle bugs before AI too.


Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?


Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.


Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.


… while not having a real distinction between flies and non-fly ingredients.


No, I think it would be far easier to pick one fly each from 100 single bowls of soup than to pick all 1,000 flies out of a 50-gallon drum.

You don’t get to fix bugs in code by simply pouring it through a filter.


I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.

Anecdote: In the 2 months after my org pushed Copilot down to everyone, the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a GitHub Action that rejects any PR if it emits new warnings, but it created a lot of pushback initially.
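A minimal version of that kind of gate can be sketched as the check such an action might run; the log format and baseline handling here are hypothetical, not the commenter's actual action:

```python
import re

def count_warnings(log_text: str) -> int:
    """Count compiler-style 'warning' tokens in a build log."""
    return len(re.findall(r"\bwarning\b", log_text, flags=re.IGNORECASE))

def gate(log_text: str, baseline: int) -> bool:
    """Pass only if the build emits no warnings beyond the recorded baseline."""
    return count_warnings(log_text) <= baseline

# Invented sample log with two warnings:
sample = (
    "src/a.c:10: warning: unused variable 'tmp'\n"
    "src/b.c:4: warning: 'n' shadows a previous declaration\n"
)
print(gate(sample, baseline=2))  # existing warnings are grandfathered in
print(gate(sample, baseline=0))  # a PR that adds warnings gets rejected
```

The baseline-then-ratchet approach is what makes this workable on a codebase that already has warnings.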


Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out obvious bugs, problems, and design improvements, your questions about it are answered with an immediate flurry of commits and it's back to square one. (No, I don't think this component needs 8 useEffects added to it that deal exclusively with global state only relevant 2 layers down, effectively treating React components like an event-handling system for data. Don't believe people who tell you LLMs are good at React; if you see a useEffect with an obvious LLM comment above it, it's likely buggy or unnecessary.)

Who are we speeding up, exactly?


Yep, and if you're lucky, they actually paste your comments back into the LLM. A lot of the time it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.

It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.


pg_ system tables aren’t built for direct consumption. You typically have to massage them quite a bit to measure whatever statistic you need.
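For example, a table's cache hit ratio isn't stored anywhere directly; you derive it from the heap_blks_read and heap_blks_hit columns of pg_statio_user_tables. A sketch of that massaging, with invented numbers standing in for a real query result:

```python
# Rows shaped like a query against pg_statio_user_tables would return:
# (relname, heap_blks_read, heap_blks_hit). The numbers are made up.
rows = [
    ("orders",     1_200, 98_800),
    ("users",        300,  9_700),
    ("audit_log", 50_000, 50_000),
]

# The statistic you actually want has to be computed per table.
hit_ratio = {
    name: hit / (read + hit)
    for name, read, hit in rows
    if read + hit > 0
}

for name, ratio in hit_ratio.items():
    print(f"{name}: cache hit ratio = {ratio:.1%}")
```

In practice you'd do the division in SQL, but either way the useful number is something you compute, not something you select.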


10% is a large drag on the cap table.


If that actually becomes material, they'll offer to buy shares in the next round. That's the point at which this whole conversation becomes interesting; right now, it's complexity for its own sake.

I know the feeling! I left a company some years back in a complicated way, and my instinct was to drill in as well. It seems like a big deal! It really isn't, though.


Or they'll find a way to dilute the co-founder's shares so they don't have to buy them out.


If that's going to happen, it's going to happen. I've heard as many stories of it happening as I've heard stories of people unhappy with the amount of liquidity they were able to achieve early in the life of a company that later became successful.


This is a joke, right? Seed investors will get 10-30% of a company for under a million dollars, which will be blown through in less than a year. That means they’re a drag on the cap table, right?


What does that mean?


That, theoretically, future investors will be reluctant to invest because the departed founder's 10% crowds out equity that could otherwise be used to attract key performers down the line.


Nah, they can just issue more. He's already giving up 40% -- plenty of headroom.


The exact details are unclear from the original post, but he definitely isn't giving up 40%. If they've only raised the pre-seed (a reasonable inference given the low valuation), then 10% ownership after 18 months points to two co-founders and a combined investor and option pool dilution of 20%. Anything is possible, of course, but unless the deal terms were very non-standard, this scenario makes the most sense.

You're right that 10% isn't necessarily a huge deal for investors, though. Early-round investor models target a specific ownership stake, and the company has to issue the same number of shares for that no matter what the composition of existing shareholders is.

The challenge with founders leaving is more psychological, like an early engineer who's vested a quarter of their 1% grant realizing that they still have to work hard for three years just to get a tenth of what the guy leaving already has. That's an easy way to suffocate the remaining team's motivation. Potential investors will (and should) look into it, but most of the time it's fine.
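The dilution mechanics behind that kind of back-of-envelope inference can be sketched with round numbers (the founder split and round sizes here are assumptions for illustration, not facts from the original post):

```python
def dilute(share: float, *round_dilutions: float) -> float:
    """Ownership after successive rounds, where each round's new shares
    amount to the given fraction of the post-round company."""
    for d in round_dilutions:
        share *= 1 - d
    return share

# Two equal co-founders, then pre-seed investors + option pool take 20% combined:
print(f"{dilute(0.50, 0.20):.0%}")        # each co-founder ends at 40%
# A later round taking another 20% compounds the dilution further:
print(f"{dilute(0.50, 0.20, 0.20):.0%}")  # 32%
```

Vesting then determines how much of that post-dilution stake a departing founder actually keeps.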


I'm not saying I agree with the concern, I'm just articulating what it is. I think the answer here is super simple: walk away with the 10% vested. (Also: stop thinking in terms of %).


Evals.

In this case, they could have QA'd the changes, they just didn't care.


There's zero danger writing Terraform. The danger is running `apply`.


What's the point of writing TF if you never mean to apply?


You apply with a human in the loop.


GLP-1 agonists turn out to be better for alcohol and smoking cessation than any other drug we have. Truly a miracle drug.


But they aren't actually prescribed for these things (yet?), right?


Reports from friends in the EU are that doctors are prescribing it off-label for all sorts of things: autoimmune conditions, ADHD, OCD, etc. I take it for ME/CFS and it works well for me, but I get grey-market supply. I’ve been on a low dose for a few years now; I was already researching it when it became a craze, and the mass adoption really helped dial in dose safety. A small percentage of people like myself are super sensitive to it and need much lower doses.


Very similar here: I have an autoimmune issue that it basically fixes, but I'm also super sensitive. I never went above 2.5, which feels strong, and often do half of that or less.


Some people say it's an "off-label" use -- but that's up to the doctor and patient ultimately.

But I think that's okay -- because the main purpose is weight loss and reduction of caloric cravings. Alcohol is a caloric craving. So it should be fine.


That wasn't their main purpose as they were brought to market. Even the anti-obesity use was off-label.

I don't think it would get me off alcohol, though. I only drink to relax and be more social, not because I enjoy it or the taste. But that means I drink rarely anyway.


I can tell you that being on it, food to me just tastes different.

So even if you didn't enjoy it, what typically happens to me anyway, is the off flavors get magnified.

Like the easiest one is Costco Chili. It tasted vastly different when I was off of it than on it. I taste more of the bitter notes. The complexity is more off putting to me.

I inject on say Sunday nights, and on Monday an IPA tastes terrible. But on Friday it tastes tolerable again. And this is a pretty consistent pattern I've noticed.


> Even the anti obesity is an off label use.

Off-label for Ozempic - the whole point of Wegovy is the “weight management” indication.

