Hacker News | tmnvdb's comments

Citation needed. As far as I know this is simply false. Different sanctions have different goals, and regime change is very rarely one of them. Often the aim is to reduce economic growth to keep or make the country weak, or to achieve some other goal. See for example the sanctions on India, which are definitely not meant to overthrow the Indian government.


Sometimes it's both.

"Your country is sanctioned because your government is being a global ass, wink-wink"

Implying that a change in government will lift the sanctions.


Did you read the article?


Yes, the points are valid but over-generalized. I've met many engineers whom I would consider "the best" who aren't the whining remote prima donnas the article makes them all out to be.


If you train an LLM on chess, it will learn that too. You don't need to explain the rules: just feed it chess games and at some point it will stop making illegal moves. It is a clear example of a world model inferred from training.

https://arxiv.org/abs/2501.17186

PS "Major commercial American LLM" is not very meaningful, you could be using GPT4o with that description.


> This is not a demonstration of a trick question.

It's a question that deliberately exploits a limitation of the system. There are many such questions for humans; they are called trick questions. It is not that crazy to call this one a trick question.

> This is a demonstration of a system that delusionally refuses to accept correction and correct its misunderstanding (which is a thing that is fundamental to their claim of intelligence through reasoning).

First, the word 'delusional' is strange here unless you believe we are talking about a sentient system. Second, you are just plain wrong. LLMs are not "unable to accept correction" at all; in fact they often accept incorrect corrections (sycophancy). In this case the model is simply unable to understand the correction (because of the nature of the tokenizer), and it is therefore 'correct' behaviour for it to insist on its incorrect answer.

> Why would anyone believe these things can reason, that they are heading towards AGI, when halfway through a dialogue where you're trying to tell it that it is wrong it doubles down with a dementia-addled explanation about the two bs giving the word that extra bounce?

People believe the models can reason because they produce output consistent with reasoning. (That is not to say they are flawless or we have AGI in our hands.) If you don't agree, provide a definition of reasoning that the model does not meet.

> Why would you offer up an easy out for them like this? You're not the PR guy for the firm swimming in money paying million dollar bonuses off what increasingly looks, at a fundamental level, like castles in the sand. Why do the labour?

This, like many of your other messages, is rather obnoxious and dripping with performative indignation while adding little in the way of substance.


> No, it's the entire architecture of the model.

Wrong, it's an artifact of tokenization. The model doesn't have access to the individual letters, only to the tokens. Reasoning models can usually do this task well, since they can spell out the word in the reasoning buffer; the fact that GPT-5 fails here is likely a result of the question being answered by a non-reasoning version of the model.
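
To make the tokenization point concrete, here is a minimal sketch. It assumes the open-source tiktoken library and its cl100k_base encoding purely for illustration; the tokenizer GPT-5 actually uses is not public. The point is only that a word reaches the model as a handful of token IDs, not as individual letters.

    # Illustration only: tiktoken's cl100k_base encoding stands in for whatever
    # tokenizer the model actually uses.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("blueberry")
    print(tokens)                                               # a short list of integer token IDs
    print([enc.decode_single_token_bytes(t) for t in tokens])   # the sub-word chunks the model "sees"
    # Nowhere in this representation are the individual letters available,
    # which is why letter-counting requires the model to spell the word out first.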

> There's no real reasoning.

This seems like a meaningless statement unless you give a clear definition of "real" reasoning as opposed to other kinds of reasoning that are only apparent.

> It seems that reasoning is just a feedback loop on top of existing autocompletion.

The word "just" is doing a lot of work here - what exactly is your criticism here? The bitter lesson of the past years is that relatively simple architectures that scale with compute work surprisingly well.

> It's really disingenuous for the industry to call warming tokens for output, "reasoning," as if some autocomplete before more autocomplete is all we needed to solve the issue of consciousness.

Reasoning and consciousness are separate concepts. If I showed the output of an LLM 'reasoning' (you can call it something else if you like) to somebody 10 years ago, they would agree without any doubt that reasoning was taking place there. You are free, of course, to provide a definition of reasoning which an LLM does not meet, but it is not enough to just say it is so. Using the word "autocomplete" is rather meaningless name-calling.

> Edit: Letter frequency apparently has just become another scripted output, like doing arithmetic. LLMs don't have the ability to do this sort of work inherently, so they're trained to offload the task.

Not sure why this is bad. The implicit assumption seems to be that an LLM is only valuable if it literally does everything perfectly?

> Edit: This comment appears to be wildly upvoted and downvoted. If you have anything to add besides reactionary voting, please contribute to the discussion.

Probably because of the wild assertions, charged language, and rather superficial descriptions of actual mechanics.


These aren't wild assertions. I'm not using charged language.

> Reasoning and consciousness are separate concepts

No, they're not. But, in tech, we seem to have a culture of severing the humanities for utilitarian purposes, when classical reasoning uses consciousness and awareness as elements of processing.

It's only meaningless if you don't know what the philosophical or epistemological definitions of reasoning are. Which is to say, you don't know what reasoning is. So you'd think it was a meaningless statement.

Do computers think, or do they compute?

Is that a meaningless question to you? I'm sure, given your position, it's irrelevant and meaningless.

And this sort of thinking is why we have people claiming software can think and reason.


> > > Reasoning and consciousness are separate concepts

> No, they're not. But, in tech, we seem to have a culture of severing the humanities for utilitarian purposes [...] It's only meaningless if you don't know what the philosophical or epistemological definitions of reasoning are.

As far as I'm aware, in philosophy they'd generally be considered different concepts with no consensus on whether or not one requires the other. I don't think it can be appealed to as if it's a settled matter.

Personally I think people put "learning", "reasoning", "memory", etc. on a bit too much of a pedestal. I'm fine with saying, for instance, that if something changes to refine its future behavior in response to its experiences (touch hot stove, get hurt, avoid in future) beyond the immediate/direct effect (withdrawing hand) then it can "learn" - even for small microorganisms.


You have again answered with your customary condescension. Is that really necessary? Everything you write is dripping with patronizing superiority and combative sarcasm.

> "classical reasoning uses consciousness and awareness as elements of processing"

They are not the _same_ concept then.

> It's only meaningless if you don't know what the philosophical or epistemological definitions of reasoning are. Which is to say, you don't know what reasoning is. So you'd think it was a meaningless statement.

The problem is that the only information we have is internal. So we may claim those things exist in us, but we have no way to establish whether they are happening in another person, let alone in a computer.

> Do computers think, or do they compute?

Do humans think? How do you tell?


You use a lot of anthropomorphisms: it doesn't "know" anything (does your hard drive know things? Is that relevant?), and "making things up" is even more strongly linked to conscious intent. Unless you believe LLMs are sentient, this is a strange choice of words.


I originally put quotes around "know" and somehow lost it in an edit.

I'm precisely trying to criticize the claims of AGI and intelligence. English is not my native language, so nuances might be wrong.

I used the word "makes-up" in the sense of "builds" or "constructs" and did not mean to imply any intelligence there.


So, similar to Wikipedia.


Similar to anything really. Can I really trust anything without verifying? Scientific journals?


It seems that on some level you have to, in order to avoid constantly re-examining your own thoughts and researching every fact. Whether you trust a given source should surely depend upon its reputation regarding the validity of its claims.


I agree, and by reputation you mean accuracy. We implicitly know not to judge anything as 100% true and we apply skepticism towards sources; the degree of skepticism is decided by our past experience with those sources.

Think of LLMs as a less accurate version of scientific journals.


Accuracy certainly does play a role, but it is not in itself sufficient to prevent an infinite regress: how does one determine the accuracy of a source if not by evaluating claims about the source, which themselves have sources that need to be checked for accuracy? Empirical inquiry is optimal but often very impractical. Reputation is accuracy as imperfectly evaluated by society, or by specific social groups, collectively.


Do you verify everything? When your wife puts food in front of you, do you make her take a bite off your plate first to check for poison?


A crucial difference between LLMs and people is that you can build a mutual trust relationship with a person.


Exactly - also, if I were 14 I'd be hyperventilating right now. I love your blog posts!


PPDF is a great book but hard to apply. I recommend looking at some Kanban literature; a classic in this space is Actionable Agile Metrics for Predictability.


It is precisely to reduce cycle time that we control queue size. It's also not entirely true that cycle time is a purely lagging indicator: every day an item ages in your queue, you know its eventual cycle time has increased by one day. Hence the advice to track item age to control cycle time.
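
As a rough illustration (hypothetical numbers, and assuming Little's Law, average cycle time ≈ average WIP / average throughput, which is what ties queue size to cycle time), plus item age as the leading signal:

    from datetime import date

    # Little's Law in steady state: avg cycle time = avg WIP / avg throughput.
    avg_wip = 12       # items currently in progress (hypothetical)
    throughput = 3.0   # items finished per week (hypothetical)
    print(avg_wip / throughput)  # 4.0 weeks average cycle time; halve the queue, halve the wait

    # Item age as a leading indicator: every day an unfinished item sits,
    # its eventual cycle time grows by a day.
    start_dates = {"ITEM-101": date(2024, 5, 1), "ITEM-107": date(2024, 5, 20)}  # hypothetical items
    today = date(2024, 6, 1)
    print({item: (today - started).days for item, started in start_dates.items()})
    # {'ITEM-101': 31, 'ITEM-107': 12} -> ITEM-101 is the one dragging cycle time up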


If it's really important to have an accurate estimate for a large work package, you are in trouble: there is no such thing.

