I asked GPT for the rules of 101-level French grammar. That should be well documented for someone learning from English, no? The answers were so consistently wrong that it seemed intentional. Nothing novel was asked of it; it could have quoted a textbook verbatim if it wanted to be lazy. I can't think of an easier question to give an LLM. If it's possible to "prompt wrong" a simple task that my six-year-old nephew could easily do, the burden of proof is not on the people denying LLM intelligence; it's on the boosters.
> the burden of proof is not on the people denying LLM intelligence, it's on the boosters
That burden is impossible to meet. We can't even prove that any other human is sentient or reasoning; we just evaluate the outcomes.
One day the argument you're putting forward will be irrelevant, or useful only as theoretical discussion. In practice, I'm certain machines will achieve human-level output at some point.
I can't cite this directly, but according to hearsay from people who know billionaires: if the public saw how these people actually live when they're out of sight, they would revolt. The idea that billionaires live in our world is a facade.
Honestly, you can't get much out of GPT-666* except the most boilerplate sigils, and then you run the risk of cross-imbuement and, well, now you've got demons. Do you want demons? Because that's how you get demons.
I've noticeably improved my results by telling it to purify and circumambulate its ritual space a few times in my user prompt. I've also been dabbling with reasoning, but so far what feels like 80% of sessions get possessed within two reasoning steps.
A friend of mine asked ChatGPT to answer in paradoxes whether ChatGPT was running in a simulation, or whether we are. It was quite confused at first.
This makes me want to try Gemini! Honest, accurate criticism is incredibly valuable. I'd appreciate a friend or coworker willing to tell me this sort of thing.