vonneumannstan's comments

So do you have to be a god-tier Nobel Laureate to get this kind of gig, where you just learn about a business, offer random suggestions that may or may not help them, and charge obscene fees for the privilege?

You definitely don't have to be god tier anything, you just need to know at least a little more than the companies you are consulting for.

This kind of work has been my primary income for the last 4 years or so. I'm nowhere near Feynman's level, but I know enough about enough different things that I get a lot of reputational referrals.


>you just need to know at least a little more than the companies you are consulting for.

Sometimes (I'd argue often, actually) you don't even need that. Simply having a fresh outside perspective, and not being part of any of the existing groups/silos, is valuable.


Often the most useful thing is just listening to the right people in the company. I wouldn't be surprised if someone at the company in the story had already had the idea for the third electrode, but it took the suggestion from the high-paid consultant for it to be taken seriously.

Probably true, but to get the job in the first place you probably need some showy, impressive credentials.

I imagine you can also start by doing the same thing for a low cost, or for free. Find a local business that’s interested, give your advice, build reputation, repeat.

I think it's good to give out 1% as free advice.

You can be all over the ballpark and people always get their money's worth :)

If you deliver something the client can take to the bank, they will often come back to see what happens with those kinds of returns if they actually invest something substantial.


I think the story sounds fake because they listened to him.

Having the ideas is easy. Persuading an organization to change is not.

Perhaps it’s a cultural difference between the middle of the 20th century and now.


Often the highly paid consultants are there entirely to get the organisation to listen to the right ideas that already exist within the company.

That's basically what happened with Feynman's involvement in the Challenger enquiry, as he himself freely admitted in his memoir.

Why would a small company CEO hire a famous consultant only to ignore his suggestions? Absolutely not evidence of it being fake.

I’m not saying it is fake - I’m saying that’s the most absurd part.

Nope! There are consulting companies all over the place filled with bids and not filled with Nobel laureates!

Ergo...


None that offer that level of work-life balance, though…

Not really. Just need to be really good at your shit and cut through pointy-haired BS.

It's really hard to take people who say this seriously: "If you asked me six months ago what I thought of generative AI, I would have said that we’re seeing a lot of interesting movement, but the jury is out on whether it will be useful"

Like, I'm sorry, but if you couldn't see that this tech would be enormously useful for millions if not billions of people, you really shouldn't be putting yourself out there opining on anything at all. Same vibes as the guys saying horseless carriages were useless and couldn't possibly do anything better than a horse, which after all has its own mind. Just incredibly short-sighted and lacking curiosity or creativity.


The first car prototypes were useless, and it took a few decades to get a good version. The first combustion engine was built in 1826. Would you have bought a prototype, or a carriage, for transportation at that time?

No, but AI isn't going to light on fire as I drive and potentially kill me. It's also not an exorbitant expense.

LLMs have convinced people to "light themselves on fire," so to speak, as they drove the LLM. Those people are dead now.

If you couldn't foresee how they would eventually be useful with improvements over time, you probably bought a lot of horse carriages in 1893 and appropriately lost your ass.

The problem is, that's only one way you could have lost your ass during the transition from horses to cars. I think that's what skydhash was getting at. There were hundreds of early car companies, all competing aggressively with one another, each with something to sell that the others seemed to lack. For every one that succeeded, dozens ended in bankruptcy, and not necessarily through any obvious fault of their own. You could be right about everything and still end up broke.

How to avoid being a Duryea, a Knox, a Marsh, a Maxwell-Briscoe, or a Pope-Toledo seems to be the real question.

Same thing pretty much happened in the early days of radio, with the addition of vicious patent wars. Which I'm sure we'll eventually see in the AI field, once the infinite money hose starts to dry up.


And even if AI becomes a staple of daily life, why the rush? No one today learns how to drive on a Model T. I've never even used Win 95 or earlier, and I only learned about Linux around 2010. Now I've been reading its source code.

This is exactly the end state of hiring via Leetcode.

>Far more terrifying is Big Tech having access to a closed version of the same models, in the hands of powerful people with a history of unethical behavior (i.e. Zuckerberg's "Dumb Fucks" comments).

Lol what exactly do you think Zuck would do with your voice, drain your bank account??


More likely he'd sell your family ads using your voice.

This is exactly wrong. At the highest levels you play your opponent, not just GTO. No one can play pure GTO, and you exploit the ways your opponent deviates from GTO.

Maybe, if you have a big enough sample of hands to be confident they're deviating in some spot; otherwise you risk getting exploited yourself.
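
To put numbers on that indifference point: a minimal toy sketch of the standard pot-sized river-bluff spot (textbook math only; the pot size, bet size, and frequencies below are assumptions of the toy game, not anyone's actual data):

    # Toy spot: pot = 1, hero makes a pot-sized river bet with a polarized
    # range (value hands always win when called, bluffs always lose).
    # Villain holds a bluff-catcher.
    POT, BET = 1.0, 1.0

    def villain_call_ev(bluff_freq):
        # Villain's EV of calling, given what fraction of hero's bets are bluffs.
        return bluff_freq * (POT + BET) - (1 - bluff_freq) * BET

    def hero_bluff_ev(call_freq):
        # Hero's EV of bluffing, given how often villain calls.
        return (1 - call_freq) * POT - call_freq * BET

    # GTO frequencies: each side makes the other indifferent.
    gto_bluff = BET / (POT + 2 * BET)  # 1/3 of hero's bets are bluffs
    gto_call = POT / (POT + BET)       # villain defends half the time (MDF)

    print(villain_call_ev(gto_bluff))  # 0.0 -> calling and folding tie
    print(hero_bluff_ev(gto_call))     # 0.0 -> bluffing and giving up tie

    # Off-GTO, the best response is extreme:
    print(hero_bluff_ev(0.4))  # villain over-folds: +0.2 per bluff, so bluff everything
    print(hero_bluff_ev(0.6))  # villain over-calls: -0.2 per bluff, so never bluff

And if your read on the deviation is wrong (they're actually over-calling, not over-folding), the same bluff-heavy adjustment loses money, which is exactly the "getting exploited yourself" risk.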

The players who study GTO, instead of trying to win these meta mind games, have proven to do very well in online heads-up, while the old-school mind-games guys keep going boom and bust.


I don't think online represents the highest levels of the game...

Online high-stakes heads-up cash games are by far the most competitive field in all of poker, for several reasons (global competition, more hands per hour, very few fish, etc.).

An online $100/$50 heads-up match is probably equivalent to a $10k/$5k live game in terms of the quality of pros you'll find grinding it.

What do you think represents the highest level of poker instead?


I don't think you've played much GTO poker if you believe this.

>[T]his rejects any fixed, universal moral standards in favor of fluid, human-defined "practical wisdom" and "ethical motivation." Without objective anchors, "good values" become whatever Anthropic's team (or future cultural pressures) deem them to be at any given time.

Who gets to decide the set of concrete anchors that get embedded in the AI? You trust Anthropic to do it? The US Government? The Median Voter in Ohio?


90% weight to in-person exams without technology. 10% to quizzes or homework. You can't trust anything done outside of the classroom to accurately show competency. Problem solved.

Claude Cowork was apparently completely written by Claude Code. So this appears, yet again, to be a skill issue.

> apparently completely written by Claude Code

https://www.promptarmor.com/resources/claude-cowork-exfiltra...

> Claude Cowork Exfiltrates files

That explains it


OMG that's right, no human has ever written vulnerable code! Shut it down, y'all, the AI thing is over!! This guy nailed it!

In this case the "vulnerability," if you can even call it that, is so blindingly obvious that anyone who knows what a pen test is could've found it in a second. The only way this gets released in an otherwise-functional organization is by going yolo mode with an LLM (or being willfully ignorant, or both).

>I would bet all of my assets of my life that AGI will not be seen in the lifetime of anyone reading this message right now. That includes anyone reading this message long after the lives of those reading it on its post date have ended.

By almost any definition available during the '90s, GPT-5 Thinking/Pro would pretty much qualify. The idea that we are somehow not going to make any progress for the next century seems absurd. Do you have any actual justification for why you believe this? Every lab says they see a clear path to improving capabilities, and there's been nothing shown by any research I'm aware of to justify doubting that.


The fact is that no matter how "advanced" AI seems to get, it always falls short of what we think of as true AI. It's always a case of "it's going to get better," and it's been said that way for decades now. People have been predicting AGI for far longer than the horizon over which I'm predicting we won't attain it.

LLMs are cool and fun and impressive (and can be dangerous), but they are not any form of AGI -- they satisfy the "artificial", and that's about it.

GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.


>GPT by any definition of AGI is not AGI. You are ignoring the word "general" in AGI. GPT is extremely niche in what it does.

Definitions in the '90s basically required passing the Turing Test, which was probably passed by GPT-3.5. Current definitions are too broad, but something like "better than the average human at most tasks" seems to be basically met by, say, GPT-5. Definitions like "better than all humans at all tasks" or "better than all humans at all economically useful tasks" are closer to superintelligence.


The Turing Test was never about AGI.


That's pretty much exactly what Alan Turing made the Turing test for. From the Wikipedia entry:

> The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human.

> The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'"

> This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think".


Cherry-picking is not a meaningful contribution to this discussion. You are ignoring the entire section on that page called “Weaknesses”.


Cherry-picking? You made a completely factually wrong statement. There was no cherry-picking. You said the Turing test was never about AGI. You didn't say it has weaknesses. Even if it were the worst test ever made, it was still about AGI.

Ignoring the entire article including the "Strengths" section and only looking at "Weaknesses" is the only cherry-picking happening.

And if you read the Weaknesses section, you'll see very little of it is relevant to whether the Turing test demonstrates AGI. Only 1 of the 9 subsections is related to this. The other weaknesses listed include that intelligent entities may still fail the Turing test, that if the entity tested remains silent there is no way to evaluate it, and that making AI that imitates humans well may lower wages for humans.


They have to say that, or there'll be a loud sucking sound and hundreds of billions in capital will be withdrawn overnight.


OK, that's great. Do you have any evidence suggesting scaling is actually plateauing, or that the capabilities of GPT-6 and Claude 4.5 Opus won't be better than today's models?


You are suggesting, in your reference to scaling, that this is a game of quantity. It is not.


You can make this bet real if you really believe it, which of course you don't. If you actually do, then I can introduce you to some people happy to take your money in perpetuity.

