It's just God for them, basically: they project the solutions to their fears onto it. E.g., fear of death: religion has heaven; with AI they believe it will multiply their lifespan with some magic.
This is my experience, which is why I stopped altogether.
I think I'm better off developing a broad knowledge of design patterns and learning the codebases I work with in intricate, painstaking detail as opposed to trying to "go fast" with LLMs.
It's the evergreen tradeoff between the short and long terms. Do I get the nugget of information I need right now but lose in a month, or do I spend the time and energy that leads to deeper understanding and years-long retention of the knowledge?
There is something about our biology that makes us learn better when we struggle. There are many names for this dynamic: the generation effect, the testing effect, the spacing effect, desirable difficulties, productive failure... they all converge on the same phenomenon: the easier it is to learn something, the worse we learn it.
Take K-12, for instance. As computing technology is integrated further and further into education, cognitive performance decreases in a near-linear relationship. Gen Z is famously the first generation to perform worse on every cognitive measure than previous generations, for as long as we've been keeping records, going back to the 19th century. An uncomfortable truth emerging from studies of electronics use in schools is that it isn't just the phones driving this. It's more the Duolingo effect of software in general: emulating the sensation of learning without actually changing the brain state, because the software that actually challenges you is not as engaging or enjoyable.
How you learn, and your ability to parse, infer, and derive meaning from large bodies of information, is increasingly a differentiator in both the personal and professional worlds. It's even more so the case when many of your peers are now learning through LLM-generated summaries averaging just 300 words, perhaps skimming outputs around 1,000 words in length for "important information". The immediate benefits are obvious, but the cost of outsourcing that cognitive work gets lost in the convenience.
Because remember, this isn't just about your ability to recall specific regex, follow a syntax convention, or how much code you ship in an hour. Your brain needs exercise, and deep learning is one of the most reliable ways to get it. Doubly true if you're not even writing your own class names.
What I am speaking to is not far away or hypothetical, either. Because as of 2023, one in four young adults in the United States is functionally illiterate.
Effective learning and memorization actually sit at the narrow edge of struggle: neither "too easy" nor "too hard and painful". SRS tools do a very good job of tuning this: by the time a question comes back to you it will feel difficult, but with some effort you'll be able to recall the information and answer it. It's a matter of recognizing this feeling and acknowledging it as "the right kind of effort" rather than a hopeless task.
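That interval-tuning can be made concrete. Here's a minimal sketch of SM-2-style scheduling, the classic spaced-repetition algorithm behind Anki-like tools (the function and variable names are my own, not from any particular library):

```python
def review(interval_days, ease, repetitions, quality):
    """One review of one card. quality: 0 (total blackout) .. 5 (perfect recall)."""
    if quality < 3:
        # Failed recall: restart the card but keep its (reduced) ease.
        repetitions, interval_days = 0, 1
    else:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * ease)
        repetitions += 1
    # Ease shrinks when recall was effortful and grows when it was easy,
    # steering each card toward that "difficult but doable" zone.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days, ease, repetitions
```

A card you answer with some effort (quality 4) at a 6-day interval and ease 2.5 comes back in 15 days; one you blank on resets to tomorrow with a lower ease, so it returns more often until it sticks.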
If you ask the AI "please quiz me on my understanding of issues x, y, and z and tell me if I got it all right; iterate on anything I get seriously wrong, then provide a summary at the end and generate SRS cards for me to train on", it will generally do a remarkably good job of it.
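If you go this route, it helps to ask the model to emit the cards in a fixed shape. Assuming you tell it to write each card as a `Q:` line followed by an `A:` line (a format of my own choosing, not any tool's standard), a few lines turn the reply into importable front/back pairs:

```python
def parse_cards(reply_text):
    """Pull (front, back) pairs out of a model reply formatted as Q:/A: lines.

    Lines that aren't part of a Q:/A: pair (chatter, summaries) are ignored.
    """
    cards, front = [], None
    for line in reply_text.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            front = line[2:].strip()
        elif line.startswith("A:") and front is not None:
            cards.append((front, line[2:].strip()))
            front = None
    return cards
```

The resulting tuples can be written out as a TSV, which most SRS tools will import directly.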
I agree, and to address this I've tried using them to understand large codebases, but I haven't worked out how to prompt for this effectively yet. Has anyone gone down this route?
I've been friends with one of the Anthropic founders for over 15 years now, and I just find this line of thinking so sad. They are not manipulative, fear-mongering people; they're actually very decent people whom you might consider listening to.
If that were true, they wouldn't publish hyped results that then turn out to be completely unsubstantiated. Remember "agents built a web browser"? I can't personally judge your friend, as I don't know him. But the company consistently lies about how good its product is in order to hype it up.
I don't talk to said friend about their work, so I genuinely have no insight here, but if I were a betting man, I'd bet what they have internally is considerably different from what is currently available in their consumer product.
The stuff they have internally might be slightly better than what they have now, lmao. You have to be super dense to believe otherwise.
Also, I don't need the Anthropic ghouls telling me what I can or cannot ask their stupid bot. At least Elon doesn't play this sad censorship game where you can't even say "boob" to it without it locking down.
Yeah, and my dad works at Nintendo. If they want us to listen, they really need to stop releasing all the bullshit and exaggerating what their chatbot does. And stop freaking whining about "MUHHH CHINA". Those ghouls stole almost all the books in the world; I hope China steals everything from them and keeps releasing the free models.
Well, if your dad isn't one of the founders of Nintendo, your point is moot. Given that I was on the founding team of DigitalOcean as head of strategy until the IPO, and one of the founders of Anthropic is a former tech journalist who covered my startups, maybe my friend really is a founder of Anthropic?! Sorry your dad didn't work anywhere cool tho. :(
Who cares about these journalists? My point is that Amodei is a complete ghoul who loves fear-mongering normies and being racist towards the Chinese while HIS company stole all the damn books in the world. I hope the Chinese steal all their data and keep releasing public models. This guy can't even figure out a solution for his balding head, let alone make an AGI, lmao. But let Anthropic keep stringing midwits along. Elon has 100x the backbone these fraudsters have, btw.
For God's sake, these guys are selling the doubling of the human lifespan to desperate elderly investors. Really going for people's deepest fears there. Oh yes, just invest in us so you can double your lifespan and won't have to die!
Alright, well, I can tell you're grumpy about this, so how about we agree to disagree? I don't know Dario, so I couldn't say, but I do trust Jack a lot. That aside: this is the third time I've heard the "racist towards the Chinese" thing. What exactly is that all about, if you'd be willing to save me a google?
Exactly, it's the Lex Fridman gambit: a reputation for asking safe questions of powerful people tends to snowball, because a safe, popular interview platform is something they're all looking to self-promote on.
If you want to see the mask slip, watch Lex's interview with Zelensky.
> Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there.
That's... a strategy. It's only a matter of time before an AI companion company succeeds at this by fine-tuning one of the open-source offerings. Cynically, I'm sure there are at least a few VC-backed startups already trying this.
Cynically I think Anthropic is on the bleeding edge of this sort of fine-tuned manipulation.
Also, if I worked for one of these firms, I would ensure that executives and people with elevated status receive higher-quality, more expensive inference than the peons. Impress the bosses to keep the big contracts rolling in, then cheap out on the day-to-day.
It's a constantly shifting goalpost. Really, it's just a big lie that says AI will do whatever you imagine it will.