“Here’s a friendly message that will perfectly convey what you want to say”.
A double-PhD friend says she has to talk to ChatGPT for all sorts of advice and doesn't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas". She let ChatGPT decide which route to take to get to a certain island, and she got stranded because the suggested service didn't exist.
How is the getting-stranded example different from asking on a travel forum how to get somewhere, where an active and well-intentioned user who isn't familiar with your area of travel answers, gives you wrong instructions, and you get lost?
It's because we spent the last 50 years training people that computers are algorithmic, cold, and don't make human mistakes. Your calculator can't tell you the meaning of life, but it will never get 2 + 2 wrong.
Well, now the calculator can tell you a meaning of life, but it'll get 2 + 2 wrong 10% of the time.
Cunningham's law [0] [1] increases the likelihood that at least one other person will point out the error and correct it. Chances are you'll get more than one person posting.
LLMs don't do this. They give confident language output, not correct answers.
Because the vast and overwhelming majority of the time, if you ask a question into the ether that nobody has a good answer to, most people will gloss over it and not bother answering, as attested by decades of relatable memes (https://xkcd.com/979/). In contrast, the chatbot is trained to always attempt an answer, and is seemingly disincentivized by its training set from just shrugging and saying "I don't know, good luck fam".
It is supposed to indicate that Microsoft cares only about money, which, to me too, seems in the same league as microslop, i.e. mildly insulting but really not rude enough to be worth censoring.
And other insults are just words as well. It's the intention, history, connotation, etc. behind words that give them meaning. M$ is meant as an insult, hence it's insulting. https://en.wiktionary.org/wiki/M$
You created it in minutes, I think the appropriate next step would be to ask another LLM to try to poke holes in it. It does not seem fair to ask security professionals to waste their time on this.
It doesn't run a similar prompt or the same prompt again and hope for the best. If it doesn't work, the agent debugs based on the errors it receives. Have you tried using a coding agent recently?
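To make that concrete, here is a minimal sketch of the run-debug loop such agents typically implement. All names here (ask_model, run_agent) are hypothetical placeholders, not any particular agent's API:

    import subprocess

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to an LLM API.
        raise NotImplementedError

    def run_agent(task: str, max_attempts: int = 5) -> str:
        # First attempt: ask the model to write the code.
        code = ask_model(f"Write a Python script that does: {task}")
        for _ in range(max_attempts):
            result = subprocess.run(
                ["python", "-c", code], capture_output=True, text=True
            )
            if result.returncode == 0:
                return code  # ran cleanly; done
            # The crucial step: the actual error output is fed back to
            # the model, so the next attempt is a targeted fix rather
            # than a blind re-roll of the same prompt.
            code = ask_model(
                f"This script failed with:\n{result.stderr}\nFix it:\n{code}"
            )
        raise RuntimeError("gave up after max_attempts")

Real agents run test suites or compilers rather than a bare python -c, but the shape of the loop is the same: the error text, not hope, drives the retry.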