To some degree I think that our widely used formal languages may just be insufficient and could be improved to better describe change.
But ultimately I agree with you that this entire societal process is just categorically different. It's simply not a description or definition of something, and therefore the question of how formal it can be doesn't really make sense.
Formalisms are tools for a specific but limited purpose. I think we need those tools. Trying to replace them with something fuzzy makes no sense to me either.
I believe the formalisms can be constructed by something fuzzy. Humans are fuzzy; they create imperfect formalisms that work until they break, and then they're abandoned or adapted.
I don't see how LLMs are significantly different. I don't think the formalisms are an "other". I believe they could be tools, both leveraged and maintained by the LLM, in much the same way that most software engineers, when faced with a tricky problem amenable to brute-force computation, will write up a quick script to answer it rather than try to work it out by hand.
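To make the analogy concrete, here is a minimal sketch of the kind of throwaway script I mean. The puzzle itself (counting four-digit numbers whose digits sum to 10) is an invented stand-in for any question that is easier to enumerate than to derive on paper:

```python
# Toy stand-in for a "quick brute-force script": instead of working out the
# combinatorics by hand, enumerate every candidate and count.
count = sum(
    1
    for n in range(1000, 10000)              # all four-digit numbers
    if sum(int(d) for d in str(n)) == 10     # keep those whose digits sum to 10
)
print(count)  # 219
```

The point is not the script itself but the move: offload the mechanical part to a deterministic tool instead of simulating it in your head.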
I think AI could do this in principle, but I haven't seen a convincing demonstration or argument that Transformer-based LLMs can do it.
I believe what makes the current Transformer-based systems different from humans is that they cannot reliably decide to simulate a deterministic machine while linking the individual steps, and the outcomes of that simulation, to the expectations and goals that live in the fuzzy parts of our cognitive system. They cannot think about why the outcome is undesirable and what the smallest possible change would be to make it work.
When we ask them to do things like that, they can do _something_, but it is clearly based on having learned how people talk about it rather than on actually applying the formalism themselves. That's why their performance drops off a cliff as soon as the learned patterns get too sparse (I'm sure there's a better term for this that any LLM would be able to tell you :)
Before developing new formalisms you first have to be able to reason properly, and reasoning requires two things: being able to learn a formalism without examples, and being able to keep track of the state of a handful of variables while deterministically applying transformation rules.
The fact that the reasoning performance of LLMs drops off a cliff after a number of steps tells me that they are not really reasoning. The 1000th rule-based transformation, depending only on the previous state of the system, should not be more difficult or error-prone than the first one, because every step _is_ the first one in a sense. There is no such cliff-edge for humans.
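A minimal sketch of what I mean, with a made-up rule and starting state chosen purely for illustration. For anything that actually executes the rule, depth changes nothing:

```python
# Illustrative toy rule: each step is a pure function of the previous state,
# so the 1000th application is computationally identical to the 1st.
def apply_rule(state):
    a, b, c = state
    return (b, c, (a + b) % 10)  # arbitrary made-up transformation of three variables

state = (1, 2, 3)
for _ in range(1000):
    state = apply_rule(state)    # no step is harder than the one before it

print(state)
```

An LLM asked to carry out the same loop token by token tends to accumulate errors long before step 1000, which is exactly the cliff I am describing.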