
>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum, and similarly gets group appeasement, or what they want to hear, from people who self-selected into the forum by being all-in on the topic and, so to speak, Wanting To Believe.

What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea that ChatGPT, in this case, is in the end fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.


> What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.


Nah, OpenAI can't have it both ways. If they're going to assert that their model is intelligent and capable of replacing human work and authority, they can't also claim that it (and they) don't have to take the same responsibility a human would for giving dangerous advice and incitement.

Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

It'd be different if one were signing up for an OpenAI Drug Advice Product that advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.


> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too unconfident but is super smart at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.

You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

This effect may force companies to simply ban chatbots from certain conversations.


I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, Reddit, etc.? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the Web in general, not uniquely ChatGPT/LLMs.

Is there an angle here I am not picking up on, do you think?


if it doesn't know the medical answer, then it should say "why tf would i know?" instead, it confidently responds "oh, you can absolutely do x mg of y mixed with z."

these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

give us all of the money, but also never trust our product.

our product will replace humans in your company, also, our product is dumb af.

subscribe to us because our product has all the answers, fast. also, never trust those answers.


The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve information to you as an authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.


Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in them.

> highly inaccurate authority.

The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing it in other contexts too: systems that trust everything in their prompt history equally, leading to security holes.


AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.


The big issue remains that LLMs cannot know that their response is inaccurate. Even after 'reading' a page with the correct info, one can still simply generate wrong data for you, delivered with authority: it just read the source and there's a link, so it must be right.

Who decides what information is "accurate"?

My trust in what the experts say has declined drastically over the last 10 years.


It's a valid concern, but with a doctor giving bad advice there is accountability and there are legal consequences for malpractice. These LLM companies want to be able to act authoritatively without any of the responsibility. They can't have it both ways.

I don't mean just doctors giving bad advice. It comes from the top, too.

For example, I remember when eggs were bad for you. Now they're good for you. The amount of alcohol you can safely drink changes constantly. Not too long ago a glass of wine a day was good for you. I poisoned myself with margarine believing the government saying it was healthier than butter. Coffee cycles between being bad and good. Masks work, masks don't work. MJ is addictive, then not addictive, then addictive again. Prozac is safe, then not safe. Xanax, too.

And on and on.

BTW, everyone always knew that smoking was bad for you. My dad went to high school in the 1930s, and said the kids called cigarettes "coffin nails". It's hard to miss the coughing fits, and the black lungs in an autopsy. I remember in the 1960s seeing a smoker's lung in formaldehyde. It was completely black, with white cancerous blobs. I avoided cigarettes ever since.

The notion that people didn't know that cigs were bad until the 1960s is nonsense.


The difference is that OpenAI has much deeper pockets.

I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".


To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.

Random forums aren't worth suing. Legally, under Section 230, Reddit is not treated as responsible for content that users post; i.e., this battle has already been fought.

On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

OpenAI _might plausibly_ be responsible for certain outputs.


Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.


A major difference is that it’s coming straight from the company. If you get bad advice on a forum, well, the forum just facilitated that interaction, your real beef is with the jackass you talked to. With ChatGPT, the jackass is owned and operated by the company itself.

The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.

fumi2026, I don't pretend to understand much of what you've described, but I'm fascinated by the vim, vigor, and (though I don't quite understand it all) the bit of cheeky humor on your GitHub page.

Tell us more!


Thanks for asking!

You nailed it with "vim and vigor." The core philosophy is a paradigm shift from Statistical Pattern Matching (current LLMs) to Analytical Mechanics as Optimization.

Instead of predicting tokens based on probability, I treat the thought process as a Quantum-MHD fluid flowing through a magnetic field of memories.

A sneak peek under the hood:

The Brain (Backend): Rust provides the architectural safety (the "laws of physics"), bridging via FFI to C++ to directly hammer CUDA kernels for training on rented A100s.

The Body (Client): I treat the mobile app as a thin native client. I use SPM (iOS) and Gradle (Android) for performant native UIs, but the entire computational metaphysics engine is a shared Rust FFI backend. Same universe logic, different screens.
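
To make it a bit more concrete, here is a minimal sketch of what a shared Rust FFI boundary like that could look like. This is illustrative only - the function names (engine_infer, engine_free) are hypothetical stand-ins, not my actual API:

    // Exposed with a C ABI so Swift (iOS) and Kotlin/JNI (Android)
    // can link against the same Rust core library.
    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    #[no_mangle]
    pub extern "C" fn engine_infer(prompt: *const c_char) -> *mut c_char {
        // SAFETY: the caller must pass a valid NUL-terminated string.
        let prompt = unsafe { CStr::from_ptr(prompt) }.to_string_lossy();
        // Placeholder for the real inference call into the engine.
        let answer = format!("echo: {prompt}");
        CString::new(answer).unwrap().into_raw()
    }

    #[no_mangle]
    pub extern "C" fn engine_free(s: *mut c_char) {
        // Hand the string back to Rust so the right allocator frees it.
        if !s.is_null() {
            unsafe { drop(CString::from_raw(s)) };
        }
    }

Swift reaches those symbols through a bridging header and Kotlin through JNI (or a thin C shim), but both platforms share the exact same core logic.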

To make things official, I also just incorporated a US company solo from here in Japan. I figured if I'm going to skip the university entrance exam, I might as well build my own vessel to sail the global market.

It sounds like sci-fi, but strictly speaking, it’s just very aggressive matrix math optimized for a pocket device. Video demo is dropping soon!


I appreciate people who are confident in their unique, out-of-left-field approach to things!

However, there is also value in being able to demonstrate success. Perhaps you might consider doing both: pursuing your novel ideas, but also finishing your exams! As the internet meme goes, "why not do both?" There certainly isn't any downside!



Oh, the Voodoo2 - that legendary card is up there with the 1080 in terms of sheer "the future is now" impact. The 1080, of course, had the extra benefit of continuing to be a wonderful card for a truly gobsmacking number of years past when one would expect it to have been relegated to "nowhere near good enough."

And, I continue to be tremendously excited that Fabien is clearly working on the next Game Engine Black Book! I struggle with the conflict of wanting to save it all for when he releases his book vs following along in the short term with his blog posts!


The two rode on each other's coattails during their ascent. Lucas was happy to give a "yeah, I was totally thinking that!" when people would point out some classic hero stuff in his simple little wonderful space opera (not damning with faint praise here - Star Wars is a ridiculously wonderful film!).

And Campbell knew a good thing when he saw it, happy to agree that Lucas' film represented a hero's journey.

This was a time when Campbell's writing was entering broad pop consciousness and his speaking engagement schedule was starting to grow: the massive popularity of Star Wars was a great ship to catch a ride on.

People wanted to see a depth in Star Wars that caught Lucas off guard (remember that he just wanted to replicate the exciting cliffhanger kids' serials of his 1950s childhood). He decided to go with it, saying it was all part of a big plan: "I have ten movies with their stories all plotted out," etc. The reality is that he cobbled things together ad hoc and rather quickly, with no real overarching intent - something he only finally admitted decades later.

I feel for him: in his mind, he was just a nuts-and-bolts technology guy who loved the "how would I make that?" questions and work far, far more than the story he had to come up with to tell. He freely admitted he hated writing. If he'd had his way, he would have merely been the head of ILM, excitedly figuring out ways to use new technology to solve filmmaking problems, but Star Wars blew up on him, becoming an over-the-top ultra-success.

The real connection between Lucas and Campbell was nearly non-existent, but it was a useful thing for each of them to strategically latch on to as their popularity began to rapidly grow.


If it weren't for THX-1138, your cynicism might be warranted. The other factor is that the simple matinees are just as tied to the hero's journey as Star Wars. The hero's journey is tied to stories from the beginning of storytelling. Lucas experienced his own hero's journey in producing the movie.

Finally, from what I know, Campbell ended up living on Skywalker Ranch. I see no reason to minimize the connection.


Campbell believed all stories were the Hero's Journey in some convoluted manner or another. You could tell him you tripped down the stairs, and he'd say something like, 'yes, but going down those stairs again would be you learning to conquer your fears, thus resulting in a more well-rounded person.'

Or you could say 'I should stop drinking milk, because I'm somewhat intolerant' and he'd say, 'ahh, yes, you're in the middle of the hero's journey, on the precibus of learning to set your desires aside for the betterment of your health'

Any story with conflict becomes the hero's journey, and what stories worth telling don't have some kind of conflict? 'Proto-story' nonsense.


precipice, not 'precibus'

This is an interesting take. Did Lucas ever actually admit he didn’t know about *The Hero with a Thousand Faces* before he wrote Star Wars? My understanding is he read Campbell after the motorcycle accident, and then it became a big influence.

Either way, I wouldn't be surprised if Campbell was the one making the connections - between the Life of Milarepa (which, in my opinion, is the closest pre-Campbell example of the hero's journey to Campbell's original framing) and The Wizard of Oz. Meaning, the stories all have the parts of the journey, but the Life of Milarepa has a one-to-one correlation.


Can you share some links to substantiate these claims that Lucas didn't have a clue as to what he was doing, and moreover hadn't been influenced by Campbell? Because I've paid quite a lot of attention to both of them, and that's completely contrary to what I've understood. Moreover, the OP link and its follow-on say otherwise.


I think it may be all summed up by Roy Amara's observation that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."


I think this is the most-fitting one-liner right now.

The arguments going back and forth in these threads are truly a sight to behold. I don't want to lean to any one side, but in 2025 I've begun to respond to everyone who still argues that LLMs are only plagiarism machines, or are only better autocompletes, or are only good at remixing the past: Yes, correct!

And CPUs can only move zeros and ones.

This is likewise a very true statement. But look where having 0s and 1s shuffled around has brought us.

The ripple effects of a machine doing something very simple and near-meaningless, but doing it at high speed, again and again, without getting tired, should not be underestimated.

At the same time, here is Nobel Laureate Robert Solow, who famously, and at the time correctly, stated that "You can see the computer age everywhere but in the productivity statistics."

It took a while, but eventually, his statement became false.


The effects might be drastically different from what you would expect, though. We've seen again and again with machine learning/AI that what looks likely to work doesn't pan out, and unexpected things do.


Just an innocent bystander here, so forgive me, but I think the flak you are getting is because you appear to be responding to claims that these tools will reinvent everything and usher in a new halcyon age of creation - when, at least on Hacker News, and definitely in this thread, no one is really making such claims.

Put another way, and I hate to throw in the now over-used phrase, but I feel you may be responding to a strawman that doesn't much appear in the article or the discussion here: "Because these tools don't achieve a god-like level of novel perfection that no one is really promising here, I dismiss all this sorta crap."

Especially when I think you are also admitting that the technology is a fairly useful tool on its own merits - a stance which I believe represents the bulk of the feelings that supporters of the tech here on HN are describing.

I apologize if you feel I am putting unrepresentative words in your mouth, but this is the reading I am taking away from your comments.


This is why I don't buy alcohol online for delivery: the delivery person is required by their company to scan my ID. Places I order from already know enough about me - they don't also need a copy of my identification.


I bought a rackmount case on eBay that for some reason got shipped "Adult signature required", which is seemingly for alcohol shipments. The FedEx delivery guy repeatedly pestered me to scan my ID. I had already shown him my ID, signed for the package, and had possession of it. But he didn't speak English and couldn't understand me telling him we were done, so he just kept repeating "scan" and shoving the terminal at me. He also kept trying to steal the package back from me as if it hadn't been delivered, and I had to keep getting very aggressive to make him back off. He then insisted I speak to his supervisor on his phone (the ones who are now unreachable when you have a problem). The supervisor then continued badgering me about their policies and threatening to call the police (I told him go right ahead). Eventually they did give up and leave. No police ever showed up, and FedEx continues to deliver to me just fine. What an all-around dystopian nightmare, though.


>"He then insisted I speak to his supervisor on his phone (the ones who are now unreachable when you have a problem)"

Soon, it'll be a chat-only chatbot accessible through a widget in their app. Or worse, a phone number you call, but it's just an LLM with TTS wrapped around it.


I think much of it is just LLM/TTS at this point. But even well before then it was a call center agent who would put in a "ticket" for nobody to call you back.


Couldn't most of that encounter have been avoided by just walking back inside your home with your package and closing the door? I don't understand why you'd even want to engage with someone like that.


Maybe?

But first, that's generally not how I operate.

Second, just because I went back inside the house doesn't mean that the situation would magically be over - they'd still be outside, right? And I'd have to monitor them until they left.

Third, it seems doing that would have encouraged them to pigeonhole the situation into the usual problem of "package getting stolen", for which they presumably do call the police, and frame the situation that way. The police coming would then make for an escalated situation which I would have to deal with. Heck, with the way police often defer to the status quo of how businesses frame problems, they might have even insisted I follow FedEx's desired procedure of scanning my ID despite it being legally unnecessary.

One of the big problems here is companies deploying user-facing agents who can't even communicate in the common language. There is another driver whom I've tried a few times now simply to work with to get packages delivered (e.g., I'll bring them in from the street because I'm in the middle of shoveling snow), but communication is needlessly difficult. I'm sure many of the destructionists face similar frustrations and then go on to blame "illegals", as if purifying society will compensate for bad incentives. But as usual, it's actually the corpos pitting us against one another in a race to the bottom.


Interesting. I order alcohol online and have never had that happen.


In the UK all delivery apps [0][1][2] will prompt the driver/rider to ask for ID when the customer purchases age-restricted items like alcohol or cigarettes.

I am not sure why the apps don't have the customer upload a photo of their ID once, rather than having the delivery person request it for every restricted order.

It wastes so much time:

https://www.youtube.com/watch?v=NYEC_ooaC5A

https://www.youtube.com/watch?v=ILzfEaSiYf4

[0] https://riders.deliveroo.co.uk/en/delivering-alcohol

[1] https://help.uber.com/en-GB/ubereats/restaurants/article/how...

[2] https://courier-help.just-eat.co.uk/hc/en-gb/articles/103290...


Well, yeah, they ask to see ID, but they don't scan it. (I'm in the US though.)


They certainly have those for radars! Although the reason for using them is different.

https://en.wikipedia.org/wiki/Corner_reflector

