I think that unless you work on a 100,000–1,000,000 GPU research cluster, you don't know what's currently possible.
I wish I knew what kinds of queries could be answered if there were no economic constraints on existing infrastructure. That would suggest whether they have already hit a scientific wall (and that's the difference between this being a $10T+ industry or a $500B one). On consumer LLMs, it's still easy to get the model to admit a query is beyond its abilities, although, to be fair, many of those questions are also beyond 99.9999% of humanity (the things I ask don't yet exist anywhere, and possibly never will, given their non-trivial engineering nature).