Can everything LLMs do be summarized as a (currently) really expensive search that lets you express nuanced queries and script the output of the search? Why or why not?
Take code. In a sense, StackOverflow is about finding a code snippet for an already-solved problem, and autocomplete does the same kind of search.
Take generative text. In a sense, that's the equivalent of making a query and then aggregating many results into one phrase. You could imagine the bot searching 1,000 websites, averaging the answers to the query, and outputting the result.
Does every LLM use case fit the following pattern?…
query -> LLM does its work -> result -> script of result (optional)
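For concreteness, here is a minimal sketch of that pattern in Python. `call_llm` is a hypothetical stand-in for whatever model API you'd actually use, and the post-processing "script" step is just a placeholder; none of these names come from a real library.

```python
# A minimal sketch of the query -> LLM -> result -> script pattern.
# `call_llm` is a hypothetical stand-in, not a specific library call.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted or local
    # model and return its generated text.
    raise NotImplementedError

def postprocess(result: str) -> str:
    # Optional "script of result" step: e.g. extract a code block,
    # strip surrounding prose, or reformat the answer.
    return result.strip()

def ask(query: str) -> str:
    result = call_llm(query)     # LLM does its work
    return postprocess(result)   # optional scripting of the output
```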
LLMs do not work that way. LLMs have no conception of facts. Any query you make to an LLM produces an output, and the quality of that output depends on the training data. When the output is high-probability, you might think the LLM is returning the correct 'facts'; when it is low-probability, you might think the LLM is hallucinating.
LLMs are not search. They are a fundamentally different thing from search. Most code is 100% deterministic: the program executes exactly in order. LLMs are not 100% deterministic.
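As a toy illustration of that last point, here is a sketch of temperature-style sampling over a made-up next-token distribution (the tokens and probabilities are invented for the example, not taken from any real model). The model always emits some token, whether or not it corresponds to a fact, and two runs on the same "query" can produce different outputs.

```python
import random

# Toy next-token distribution; the tokens and probabilities are made up
# purely to illustrate sampling, not taken from any real model.
next_token_probs = {
    "Paris": 0.62,
    "Lyon": 0.20,
    "France": 0.10,
    "banana": 0.08,
}

def sample_next_token(probs: dict[str, float]) -> str:
    # Sampling always returns *some* token; there is no "no answer" path,
    # and low-probability picks are what get called hallucinations.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Repeated calls on the same distribution can disagree, which is the
# non-determinism being pointed at above.
print(sample_next_token(next_token_probs))
print(sample_next_token(next_token_probs))
```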