> This is such a naive, simplistic, distrusting and ultimately monastic perspective

This is a disingenuous take on the article. There's nothing naive or simplistic about it; it's full of critical thought, linking to further critical analysis by academic observers of what's happening at the educational level. The content of your reply implies you read at most the first 10% of the article.

The article flagged numerous issues with LLM use in educational settings, including:

1) Critical thinking skills, brain connectivity, and memory recall are falling as usage rises; students are turning into operators and are not getting the cognitive development they would through self-directed learning.

2) Employment pressures have turned universities into credentialing institutions rather than learning institutions, and LLMs have significantly accelerated these pressures.

3) Cognitive development is being sacrificed, with long-term implications for students.

4) School admins are pushing LLM programs without consultation, as experiments rather than in partnership with faculty -- private-industry-style disruption.

The article does not oppose LLMs as learning assistants; it opposes making them the central tool for cognitive development, which is the opposite of what they accomplish. The author argues universities should exist primarily for cognitive development.

> Successful students will grow and flourish with these developments, and institutions of higher learning ought to as well.

You might as well work in OpenAI marketing with bold statements like that.


The core premise is decidedly naive and simplistic -- AI is used to cheat and students can't be trusted with it. This thesis is carried through the entirety of the article.


That's not the core premise of this article. Go read it to the end, and don't use your LLM to summarize it.

The core premise is that students' cognitive development is being impaired, with long-term implications for society, without any care or thought from university admins and corporate operators.

It's disturbing when people comment on things they haven't bothered to read -- literally aligning with the article's point that critical thinking is decaying.


So you believe students don't use AI to cheat, and you are calling the OP naive?


That's an utterly hilarious straw man, a spin worthy of politics, and what someone else might label a tautological "cheat". Students "cheated" hundreds of years ago. Students "cheated" 25 years ago. They "cheat" now. You can make an argument that AI mechanizes "cheating" to such an extent that the impact is now catastrophic. I argue that the concern over "cheating", regardless of its scale, is far overblown and a fallacy to begin with.

Graduation, or the measurement of student ability, is a game, a simulation that does not implicitly test or foster cognitive development. Should universities become hermetic fortresses to guard against these untold losses posed by AI? I think this is a deeply misguided approach.

While I was a professor myself for 8 years, and do somewhat value the ideal of The Liberal Arts Education, I think students are ultimately responsible for their own cognitive development. University students are primarily adults, not children and not prisoners. Credential provision, and graduation (in the literal sense) of student populations, are institutional practices to discard and evolve away from.


ChatGPT told them otherwise.

Seriously, you’re arguing with people who have severe mental illness. One loon downthread genuinely thinks this will transform these students into “geniuses”.


You can straw man all you like. I haven't used an LLM in a few days -- definitely not to summarize this article -- and what you claim is the central idea is directly related to my claim. It's very easy to combine them: students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically. I disagree.


When AI tools make it easy to cruise through coursework without learning anything, many students will simply choose to do that. Intellectual development requires strenuous work, and if universities no longer make students strain, most won't. I don't understand why you think otherwise.


I’m not sure how you lived through the last decade and came to the conclusion that people aged 17-25 make rational decisions about novel technologies that offer short-term gain with long-term (essentially hidden) negative side effects.


It seems that 10% of college students in the U.S. are younger than 18, or do not have adult status. The other 90% are adults who are trusted with voting and armed-services participation, and who enjoy most other rights that adults have (with several obvious and notable exceptions -- car rental, legal controlled-substance purchase, etc.). Are you saying that these adults shouldn't be trusted to use AI? In the United States, and much of the world, we have drawn the line at 18. Are you advocating that AI use shouldn't be allowed until a later cutoff in adulthood? It is not at all definitively established what these "essentially hidden" negative side effects you allude to are, or whether they actually exist.


Your argument seems overly reliant on the definition of an adult. What is an adult? Is it a measure of responsibility, of mental maturity? Because I would wager the level of responsibility and mental maturity of the average 18-year-old has been trending downward.

I’m not advocating for completely restricting access to AI for certain age groups. I’m pointing out that historically we have restricted prolonged exposure to certain stimuli that have been shown to damage cognitive development, and that we should make the same considerations here.

I think it’s hard to deny that younger generations have been negatively affected by the proliferation of social media engineered around fundamentally predatory algorithms. As have the older generations.


> You can straw man all you like

No one is misrepresenting your argument; it's well understood, and the argument is that it's false.

> students intellectual development is going to be impaired by AI because they can't be trusted to use it critically.

This debate is going nowhere, so I'll end here. Your core premise is about trust and student autonomy, which is nonsense and not what the article tackles.

It argues that LLMs literally don't facilitate cognitive development and can actually impair it, regardless of how they are used, so it's malpractice for university admins to adopt them as a learning tool in a setting where the primary goal should be cognitive development.

Students are free to do as they please; it's their brain, money, and life. But I've never heard anyone argue they were at their wisest as a student in their teens and twenties, so the argument that students should be left unguided is also nonsense.


You said I didn't read the article. That is your weak and petty straw man. Very clearly.


Really appreciate your comment -- literally the only one in this long sub-thread that picked up on the nonsensical numbers fooker put out.

Realistically, fooker's strategy of investing $100k in NDXT over 10 years would have produced around $2.5M, depending on exact timing within the year and how that $100k was partitioned. Way less than the $15M nonsense. It also would have required extreme conviction, akin to the crypto types, and runs completely counter to how multi-million-dollar portfolios are typically managed (diversified).

Also, who needs $15M? At $5M net wealth there honestly is no reason to keep working a $200k/year job. You'll make way more after-tax income even assuming a lousy 5% yearly return through capital gains. Same story for $2.5M at 10%.
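
For the curious, a quick back-of-the-envelope showing the annualized return each claim implies (assuming a single lump-sum buy at the start of the 10 years; the lump-sum timing is my simplification):

    def implied_cagr(principal, final, years):
        # Annual growth rate needed for principal to reach final in `years`
        return (final / principal) ** (1 / years) - 1

    # Implied annualized returns for the two claimed outcomes
    for final in (2_500_000, 15_000_000):
        print(f"$100k -> ${final:,}: {implied_cagr(100_000, final, 10):.0%}/yr")

Roughly 38%/yr gets you to $2.5M; the $15M claim implies roughly 65%/yr sustained for an entire decade.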


Counting on 10% returns long term is way too aggressive, especially when you have to consider sequence-of-returns risk during withdrawals. Even 5% is not conservative enough while you are in the withdrawal phase.
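
To make the sequence-of-returns point concrete, here's a toy simulation (the return sequence and withdrawal amount are invented for illustration):

    def simulate(returns, start=5_000_000, withdraw=250_000):
        # Withdraw at the start of each year, then apply that year's return
        balance = start
        for r in returns:
            balance = (balance - withdraw) * (1 + r)
        return balance

    gains_first = [0.20, 0.10, 0.05, -0.10, -0.20]
    losses_first = list(reversed(gains_first))  # same returns, crash first

    print(f"gains first:  ${simulate(gains_first):,.0f}")   # ~$3.96M
    print(f"losses first: ${simulate(losses_first):,.0f}")  # ~$3.45M

Same average return, same withdrawals, meaningfully different ending balance purely from the ordering.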

If I had $5 million of investments outside of my home, would I work? Maybe? My job is far from stressful and I work from home. I “retired” my wife over 5 years ago, when she was 44 and eight years into our marriage, so she could pursue her passions and we could travel a lot.

All of my friends still work, so what could I possibly do with my free time that I don’t do now? The only restriction that not working would lift is that we could more easily spend extended amounts of time outside of US time zones.


> Counting on 10% returns long term is way to aggressive especially when you have to consider sequence of returns risks during withdrawals

Yes, though the proposal is not to retire. There are better things to do with your life (IMO) than work a standard 9-5 job for some corporation once you've accumulated sufficient wealth for financial independence.

> Even 5% is not conservative enough while you are in the withdrawal phase.

That assumes you spend 5% per year. $5M at 5% is $250k, which is $200k+ after tax for capital gains plus eligible dividends. That's a lot of money to spend every year -- more than most families earn through their labor annually. It can also be secured against downturns with a higher allocation to 4%+ bonds. $5M is financial independence for the vast majority of households; $2.5M can be as well for many, if their baseline spend stays at $100k.
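
The math, spelled out (the 15% effective tax rate here is a placeholder assumption, not tax advice):

    def income(portfolio, rate, effective_tax=0.15):
        # Gross and net annual income from a given withdrawal/yield rate
        gross = portfolio * rate
        return gross, gross * (1 - effective_tax)

    for portfolio, rate in ((5_000_000, 0.05), (2_500_000, 0.10)):
        gross, net = income(portfolio, rate)
        print(f"${portfolio:,} at {rate:.0%}: ${gross:,.0f} gross, ~${net:,.0f} net")

Both cases land at $250k gross and roughly $212k net under that assumed rate.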

> All of my friends still work so what could I possibly do with my free time that I don’t do now?

Financial independence provides vast opportunities for those who have ideas but lack time. There's a reason most businesses are pursued by those with financial wealth on their side.


Says the user who didn't even read the article. The whole first half had nothing to do with the author's values. It was about the poor implementation of AI algorithms as it relates to creator functionality. Shitty AI creator workflow, automatic ad injection, blocking of viewers to combat ad blockers...

Useless reply.


The above article is not convincing at all.

Nothing on infra costs, hardware throughput and capacity (accounting for hidden tokens), or depreciation -- just blind faith that provider pricing "covers all costs and more". The naive estimate of 1,000 tokens per search is based on simplistic queries, exactly the kind of usage you don't need or want an LLM for; LLMs excel at complex queries with complex, long output. And it doesn't account at all for chain-of-thought (hidden tokens), which providers bill as output tokens even though they never appear in the output (surprise).
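
To illustrate how hidden reasoning tokens blow up the naive estimate, a toy cost model (all prices and token counts are made up for illustration; reasoning tokens bill at the output rate):

    def query_cost(input_tokens, visible_output, hidden_reasoning,
                   in_price_per_m=1.0, out_price_per_m=8.0):
        # Dollar cost of one query; prices are per million tokens
        billed_output = visible_output + hidden_reasoning
        return (input_tokens * in_price_per_m
                + billed_output * out_price_per_m) / 1_000_000

    print(query_cost(200, 800, 0))          # naive 1,000-token search: ~$0.007
    print(query_cost(2_000, 1_500, 8_000))  # reasoning-heavy query: ~$0.078

An order of magnitude apart, and the hidden tokens are most of the second bill.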

It completely skips the fact that the vast majority of paid LLM users are on fixed subscription pricing, precisely because the pay-per-use API equivalent would be multiples more expensive and therefore not economical.

Moving on.


> Nothing on infra costs, hardware throughput + capacity (accounting for hidden tokens) & depreciation

That's because it's coming at things from the other end: since we can't be sure exactly what companies are doing, it looks at the actual market incentives and pricing available and tries to work backwards from there. And to be fair, it also cites, for instance, DeepSeek's paper, where they talk about their profit margins on inference.

> just a blind faith that pricing by providers "covers all costs and more".

It's not blind faith. I think they make a really good argument for why the pricing by providers almost certainly does cover all the costs and more. Again, including citing white papers by some of those providers.

> Naive estimate of 1000 tokens per search using some simplistic queries, exactly the kind of usage you don't need or want an LLM for.

Those token estimates were for comparison against search pricing, to establish whether LLMs are expensive relative to other things on the market, so obviously they wanted to choose something where the domain is similar to search. That wasn't for determining whether inference is profitable in itself, and it has no bearing on that.

> Doesn't account at all for chain-of-thought (hidden tokens) that count as output tokens by the providers but are not present in the output (surprise).

Most open-source providers include thinking tokens in the output, just separated by delimiter tokens so that UI and agent software can split them out if they want to. I believe the number of thinking tokens that Claude and GPT-5 use can be known as well: https://www.augmentcode.com/blog/developers-are-choosing-old... Typically, chain-of-thought tokens are also factored into API pricing in terms of which tokens you're charged for. So I have no idea what this point is supposed to mean.
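
For example, here's roughly how agent software can split the reasoning out of an open-weight model's response (the <think> delimiter is the convention some open models use; treat the exact format as model-specific):

    import re

    def split_thinking(response):
        # Return (thinking, answer); delimiter format varies by model
        match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
        if not match:
            return "", response
        return match.group(1).strip(), response[match.end():].strip()

    thinking, answer = split_thinking("<think>check the units</think>42 km")
    print(thinking)  # check the units
    print(answer)    # 42 km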

> Completely skips the fact the vast majority of paid LLM users use fixed subscription pricing precisely because the API pay-per-use version would be multiples more expensive and therefore not economical.

That doesn't mean that selling inference by subscription isn't profitable! This is a common misunderstanding of how subscriptions work. With these AI inference subscriptions, your usage is capped to ensure the company doesn't lose too much money on you, and the goal is that most subscribers will, on average, use less inference than they paid for, subsidizing those who use more, so that it evens out. And that's before assuming that serving the full usage cap actually costs more than the subscription price, which is a pretty big assumption.
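
A toy expected-value model of that pooling (every number here is invented for illustration):

    price = 20.0                 # monthly subscription price, $
    segments = [                 # (share of users, cost to serve, $/month)
        (0.70, 3.0),             # light users
        (0.25, 15.0),            # moderate users
        (0.05, 60.0),            # heavy users, bounded by the usage cap
    ]
    expected_cost = sum(share * cost for share, cost in segments)
    print(f"expected cost ${expected_cost:.2f} vs price ${price:.2f}")  # $8.85 vs $20.00

Even with the heavy tail losing money per user, the pool as a whole comes out comfortably profitable under these made-up figures.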

If you want something that factors in subscriptions and also does the sort of first-principles analysis you're asking for, this is a good article:

https://martinalderson.com/posts/are-openai-and-anthropic-re...

And in my opinion it seems pretty clear that basically everyone who does any kind of analysis on this, whether black-box or first-principles, comes to the conclusion that you can very easily make money on inference. The only people coming to any other conclusion are those who just look at the finances of US AI companies and draw conclusions from that without doing any more detailed breakdown -- exactly like the article you linked me, which I have now finally been able to read thanks to someone posting the archive link. That article doesn't actually make any case about the subscription or unit economics of token inference whatsoever; it bases its case on OpenAI's massive overinvestment in gigantic hyperscale data centers, which is unrelated to the specific economics of AI itself.


"Fuel efficiency on an ICE can drop up to 30% after 10 years"

Complete nonsense. Every 15+ year old ICE car I've known or owned was within 5% to 10% of its original fuel economy, and the reliable brands actually maintained or surpassed their original fuel economy, since fuel economy improves as engine break-in completes around the 20k-40k mile mark.

If your ICE car dropped by 30% then share the brand and your maintenance history.


"The average ICE car does not survive to 200,000 miles"

There's a fair amount of misinformation in your post.

1) Any reliable ICE brand goes well above 200k miles with basic maintenance. There's a long history of reliability here, and it's why so many drivers choose boring yet reliable brands like Toyota, Honda, Mazda, etc. If you choose brands that do not prioritize reliability, then that's on you (e.g., Mercedes drivers switching to Tesla).

2) Mileage (distance) is not actually the determining factor in longevity; car age is. The average age of ICE cars in the USA is around 12 years -- and that's the average, which means many cars are much, much older than that. Battery cars will be lucky to average 8 years as a fleet. The probability is 75%+ that you're looking at a battery replacement at the 12-year mark, if not sooner. The vast majority of drivers will not replace that battery, making the car a throwaway due to cost (no one financially competent spends $10k-$20k on a battery for a car worth less than $10k). This will absolutely drive fleet age down, resulting in a younger fleet and more disposable cars. Replacement batteries are not plentiful or cheap, and there's no reason for that to change given the industry's strategy.

"Tesla still averages being able to hold 90% of original charge"

3) Lucky you. It's well known in the community that first-year degradation is typically 5%-10%, and thereafter 1-2% per year until a major failure. Do you know how to measure your original charge? Have you driven the car from 100% to 0% to verify total battery capacity, or are you just going off the BMS and hoping it knows the true capacity? The BMS is regularly off by 5%+, so for all you know your true capacity is already nearing 80%. If you know lithium battery science, then you know that past 80% the rate of degradation hits a cliff and accelerates. Few people drive their cars below 10% battery, so they don't really know.
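
Run those rates forward and you land right at that cliff (a simplified model using the midpoint of the figures above; compounding them this flatly is my own simplification):

    def capacity_after(years, first_year_loss=0.08, yearly_loss=0.015):
        # Remaining fraction of original capacity: first-year loss,
        # then a flat yearly loss thereafter (simplified midpoint rates)
        return (1 - first_year_loss) * (1 - yearly_loss) ** (years - 1)

    for y in (1, 8, 12):
        print(f"year {y}: ~{capacity_after(y):.0%} of original")

By year 12 the midpoint rates put true capacity around 78% -- already past the 80% cliff -- and a BMS that's 5% optimistic would still be reporting low 80s.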


What I hate are the dozens of little doodads in the car that fail after 10 years and are no longer manufactured: mechanical knobs on the AC, servos in the mirrors, power windows, wiper motors, window seals -- heck, the shifter stalk on my 2006 Forester broke (thank god for chop shops). I wonder what people are referring to when they say "this car won't last 200k miles". Do they mean the engine, the transmission, the heater, what exactly?

