Maybe they are just calling the jobs by different names? It seems like names of roles are constantly shifting. "Data scientist" is a term that is going out of fashion.
I love the post but disagree with the first example. "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt and the user is free to read the AI output if they choose.
I think in any real conversation, doing that treats the AI as an authority figure whose word ends the discussion, even though it could easily be wrong. I'd extract the logic and defend it on my own, which comes across as less rude.
And what about having a human expert fact-check the LLM's output, provided you're transparent about that output (and the prompt(s) that produced it)?
Because I'd much rather ask an LLM about a topic I don't know well and have a human expert verify the answer than waste that expert's time explaining the concept to me from scratch.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
Oh, I'm usually trying to gather information in conversations with peers, so for me it's more like, "I don't know, but here's what the LLM says."
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."
I find it nearly impossible to avoid divisive content online. There are so many cool things in the world, but I can't find them because my timelines are all flooded with culture war. I wish I could find a platform that would listen to me when I say "show fewer posts like this."
What is their business purpose for releasing an open-weights model? How does it help them? I asked an LLM, but it just said vague, unconvincing things about ecosystem plays and fights for talent.