Hacker News

I'm not talking about this being the "best maze solver" or "better at solving mazes than humans". I'm saying the model is "intelligent enough" to solve a maze.

And what I'm really saying is that we need to stop moving the goalposts on what counts as "intelligence" for these models, and start moving the goalposts on what "intelligence" actually _is_. The models are giving us an existential crisis about not only what it might mean to _be_ intelligent, but also how intelligence might actually work in our own brains. I'm not saying the current models are Skynet, but I am saying there's going to be a lot learned by reverse engineering the current generation of models to really dig into how they encode things internally.





> I'm saying the model is "intelligent enough" to solve a maze.

And I don't agree. I think that at best the model is "intelligent enough to use a tool that can solve mazes" (which is an entirely different thing) and at worst it is no different than a circus horse that "can do math". Being able to repeat more tricks and being able to select which trick to execute based on the expected reward is not a measure of intelligence.


I would encourage you to read the code it produced. It's not just a simple "solve maze" function. There are plenty of "smart" choices in there to achieve the goal given my very vague instructions, made as a result of it analyzing why it failed at first and then adjusting.

I don't know how else to get my point across: what I am trying to say is that there is nothing "smart" about an automaton that needs to resort to an implementation of the A* algorithm to "solve" a problem that any four-year-old child can solve just by looking at it.
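For context, the "textbook" approach being contrasted with a child's at-a-glance solution looks roughly like this: a minimal A* search sketch over a character-grid maze. The grid format, function name, and Manhattan heuristic here are just illustrative assumptions, not taken from the code either commenter is discussing.

```python
import heapq

def solve_maze(grid, start, goal):
    """A* search on a grid maze: '#' is a wall, anything else is open.
    Returns a shortest path as a list of (row, col) cells, or None."""
    def h(cell):
        # Manhattan-distance heuristic: admissible for 4-way movement
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, cell)
    came_from = {start: None}
    cost = {start: 0}

    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            # Walk the came_from chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            nxt = (r, c)
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#':
                if nxt not in cost or g + 1 < cost[nxt]:
                    cost[nxt] = g + 1
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable
```

A dozen lines of a well-known search algorithm, which is exactly the point of contention: selecting and wiring up this tool is what one side calls intelligence and the other calls pattern matching.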

Where you are seeing "intelligence" and "an existential crisis", I see "a huge pattern-matching system with an ever increasing vocabulary".

LLMs are useful. They will certainly cause a lot of disruption and automation across all types of white-collar work. They will definitely lead to all sorts of economic and social disruptions (good and bad). I'm definitely not dismissing them as just another fad... but none of that depends on LLMs being "intelligent" in any way.



