This depends on so many variables: the complexity of the average problem, the obscurity of the average problem, and whether there's documentation or a knowledgebase with a known solution.
ChatGPT hallucinates so many things, e.g., buttons or menu options that don't exist in the program, advice that ignores file compatibility issues, etc. I could go on for hours.
If the average problem is "I got locked out of my account" or basic, common stuff like that which just warrants sending someone a link or telling them to reboot their router, then sure, maybe it'll be better than dealing with a human being in x out of y cases, hallucinations notwithstanding.
If it's something more complex, like needing an NGINX configuration when the company has only ever documented Apache .htaccess in the past, the customer is probably seven kinds of fucked. I wasted days trying to get something other than nonsense out of ChatGPT for an NGINX config, even going so far as to feed it the documentation and the exact lines it would need for implementing URL rewrites. It kept hallucinating directives that didn't exist in the documentation, and it was a complete waste of time. Even after correcting it umpteen times, it still gave the same response. There's no reasoning applied.
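For context, the kind of translation I was after looks roughly like this. This is a hand-written sketch, not the actual config I needed: the domain, the /blog/article.php paths, and the PHP-FPM socket path are all made-up placeholders (the socket path in particular varies by distro).

```
# Apache .htaccess version (what the company already documented):
#   RewriteEngine On
#   RewriteRule ^blog/([0-9]+)$ /article.php?id=$1 [L]

# Rough NGINX equivalent. NGINX has no .htaccess; rewrites
# live in the main config, inside the relevant server block.
server {
    listen 80;
    server_name example.com;    # placeholder domain
    root /var/www/html;         # placeholder docroot

    # Capture the numeric id and rewrite to the PHP handler.
    # "last" is roughly analogous to Apache's [L] flag.
    rewrite ^/blog/([0-9]+)$ /article.php?id=$1 last;

    location ~ \.php$ {
        # Hand off to PHP-FPM; the socket path below is an assumption.
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

That's maybe a dozen lines of actual config, and it still couldn't get there even with the documentation pasted in front of it.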
Is there potential? Sure. But it isn't replacing human beings in a lot of cases. And in even more cases, just using the search function on a knowledgebase or a search engine will yield more accurate information than trusting that it isn't hallucinating. I have wasted far more time than I've saved due to hallucinations.
If all customer support does is search the problem on a knowledgebase, sure, it makes sense: an agent who can't apply logic and reasoning to the query is already just doing what the LLM does. But then why not access the knowledgebase directly and not risk hallucinations?
Sure, it's not as good as a human rep, but it's still better than the dumb chatbots we are forced to interact with now. It may be better than searching a poorly updated knowledgebase too.
It might even be better than most reps, considering the big corps not infrequently hire the cheapest labour available, and the quality of English has been a major problem in my experience. Especially if all the rep does is query a knowledgebase anyway.
I disagree very strongly that the knowledgebase is inferior to the LLM's data set.
The data sets of LLMs are not stored in a human-readable format, and they are likely a worse means of storing data because the outputs are prone to hallucination. If an output can only be rendered via the LLM itself, therein lies your problem: you can't trust that it's right and not hallucinating if you can't read the data it's drawing from.