Hacker News | zeknife's comments

I get the impression LLM agents are a bit like Tamagotchi, but for tech bros.


This is nice, but it looks so suspiciously AI-written; how can I trust it? I could just ask ChatGPT for any of these things myself.


I'm sure you can find some formulations that are AI-written, because I've used AI for structuring the content and developing the site.

As I wrote somewhere else, this is made with AI, not by AI.

I've been singing and developing for years. I'm not the expert, but I draw on others who are. Also, if anyone finds anything that looks remotely wrong, I'll happily take the feedback and update.

And use ChatGPT, but use it the same way: be curious about whether it's correct.


Unfortunately, if you reveal that you use AI in your projects, you will instantly turn a segment of your readers against you, even if your project is objectively good.

I suspect a lot of people don't reveal that they use AI for this reason.


> I could just ask ChatGPT for any of these things myself.

You wouldn't know what to ask, unless you have expertise.

The question isn't whether an LLM was used, but the trustworthiness of the human(s) behind it. Why would you trust anything by an unknown person on the Internet?


Ruby has a similarly intuitive `3.times do ... end` syntax


Go also has

    for range 5 { ... }


A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense


You must know people without egos. Humans are better at correcting their mistakes, but far worse at admitting them.

But yes, as edge-case handlers, humans still have an edge.


LLMs by contrast love to admit their mistakes and self-flagellate, and then go on to not correct them. Seems like a worse tradeoff.


It's true that the big public-facing chatbots love to admit to mistakes.

It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.


Not when your goal is to create ASI: Artificial Sycophant Intelligence


And this is why LLMs are getting cooked.

They fed internet data into that thing, then basically "told" the LLM to behave, because, surprise surprise, humans can sometimes be much nastier.


You must know better humans than I do.


At least until they spend some time with it


It also doesn't need to be good for anything to turn the world upside down, but it would be nice if it was


I see about 40 paragraphs?


I assume you're not very interested in the subject if you think synthesizers aren't real instruments


I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.


And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.


I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.


> I've never seen somebody leave their vehicle during active driving.

Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].

[0] https://en.wikipedia.org/wiki/Ghost_riding


How many images do you need? What are the use-cases that need a bunch of artificial yet photoreal images produced or altered without human supervision?


I think people still expect a lot of trial and error before getting a usable image. At 2 cents per pull of the slot machine lever, it would still take a while, though.

