I read through that, and no part of the section (or the entire work) ever talks about the above discussion. Further, I looked at several of the many citations in that section, and none of them suggest that the OP is right. In fact, a few of them I know disagree.
Depends. Models are matrices of floats, so there's little chance an umbrella term like "stochastic parrot" won't stick, even when they already show signs of syntactic and semantic world-building capability (https://www.arxiv-vanity.com/papers/2206.07682/). If you are like me (and them: https://archive.is/cZi83) and deem instruction following, chain-of-thought prompting, and the computational properties of LLMs (as researchers continue to experiment with training, memory, modality, and scaling, for example, to arrive at abstract reasoning) to be emergent, then we're on the same page.
Okay, so just to confirm: that section doesn't actually tell us anything about this, and in fact this is all based on your own understanding of the mechanisms involved.