Some programmers are good at remembering large numbers of arbitrary rules. Others aren't, and I've worked with good and bad programmers of both kinds.
So yes, I'm worried that a colleague might read the code, not know what the precedence is, and have to waste time looking it up (thankfully some IDEs now have a command to add parentheses quickly, but it's still a distraction from their actual task).
Pretty much all languages have some features that are more confusing than helpful, and good codebases avoid using those features (whether via formal policy or not). IMO most precedence rules fall into that category; it would be better if e.g. "a && b || c" were a syntax error until bracketed properly.
If the code currently works, then they can read it, and infer that whatever precedence the operators have is the correct one for producing the result the code produces. If "a + b * c" is producing 17 where (a=2,b=3,c=5), then you know that your language makes multiplication precede addition.
If the code doesn't currently work, then they'll have to figure out via some external method (looking up the original formula used in the code, say) what the precedence needs to be, in order to parenthesize to make it work.
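As a quick sketch of the inference in the working case (using shell arithmetic here, which inherits C's precedence rules):

```shell
# A working line pins down the precedence: with a=2, b=3, c=5,
# a + b * c yields 17 only if * binds tighter than +.
a=2 b=3 c=5
echo $((a + b * c))    # 17: parsed as a + (b * c)
echo $(((a + b) * c))  # 25: the other possible grouping
```

If the observed result had been 25 instead, the same reasoning would tell you addition binds tighter.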
On a separate note,
> it would be better if e.g. "a && b || c" were a syntax error until bracketed properly.
this reminds me of the horribly-confusing practice of using "a && b || c" to mean "a ? b : c" in shell-scripting. It almost works, too... unless (a=true,b=false), in which case you unintentionally get the side-effects of c.
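A minimal illustration of that pitfall, with `true`/`false` standing in for real commands that have side effects:

```shell
# "a && b || c" is almost "if a then b else c" -- until b fails.
# Here a succeeds but b returns a failure status, so c runs anyway,
# even though a was true.
a() { true; }
b() { false; }
c() { echo "side effect of c"; }

a && b || c    # prints "side effect of c"
```

The real ternary in shell is an explicit `if a; then b; else c; fi`, which doesn't have this failure mode.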
>If the code currently works, then they can read it, and infer that whatever precedence the operators have is the correct one for producing the result the code produces. If "a + b * c" is producing 17 where (a=2,b=3,c=5), then you know that your language makes multiplication precede addition.
By that logic why bother using a font in which * and + look like different symbols? Heck, why read the code at all? If the code is working you can infer what it must be doing by observing what comes out when you feed it different inputs.
You read code precisely because you don't know what it does for every input, or don't know how it implements the algorithm; you want to be able to look at a line and see what it does, without having to fire up a repl and run through several examples. I mean, the idea that code should be readable - i.e. that you should be able to tell what a given line of code does without having to run it or look it up - is about as fundamental a good coding principle as it gets.
When you read "a + b * c"--and then test your assumption of what it does in a REPL--the result is a learning moment where that knowledge sticks with you; from then on, you know which of the two operators comes first. You only have to do it once.
On the other hand, "using a font in which * and + look like [the same symbol]" means never being able to recognize the pattern, which means never learning anything and having to check every time.
Also,
> the idea that code should be readable - i.e. that you should be able to tell what a given line of code does without having to run it or look it up - is about as fundamental a good coding principle as it gets.
I would agree that that is a good coding principle for low-level C/C++/Java code; in these languages, you can't separate the abstract meaning of code from its implementation, so it's better to just keep the two things together.
But on the other hand, the equivalent coding principle for Lisp is "create a set of macros which form a DSL to perfectly articulate your problem domain--and then specify your solution in that DSL." The equivalent for Haskell is "find a Mathematical domain isomorphic to your problem domain; import a set of new operators which match the known Mathematical syntax of that domain; and then state your problem in terms of that Mathematical domain by using those operators." In either of these cases, nobody can really be expected to just jump in and read the code without looking something, or a lot of somethings, up.
This isn't because the code is "objectively bad", but rather that it has externalized abstractions which in a lower-level language like C would have to be written explicitly into the boilerplate-structure of your code. These are just two different cultures.
>When you read "a + b * c"--and then test your assumption of what it does in a REPL--the result is a learning moment where that knowledge now sticks to you; from then on, you know which of the two operators come first. You only have to do it once.
If you're good at memorizing essentially arbitrary rules, then you only have to do it once. For + and * you can argue that the precedence is standard in the domain language (mathematics), but C-like languages often have a dozen or so operators in the table, and there's no natural reason why >> should be higher or lower than /.
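For instance (again in shell arithmetic, which follows C's precedence table), `+` binds tighter than `>>`, a grouping there's no way to guess from first principles:

```shell
# In C-family precedence, additive operators outrank shifts,
# so 16 >> 1 + 1 is 16 >> (1 + 1), not (16 >> 1) + 1.
echo $((16 >> 1 + 1))    # 4: parsed as 16 >> (1 + 1)
echo $(((16 >> 1) + 1))  # 9: the grouping a casual reader might expect
```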
>The equivalent for Haskell is "find a Mathematical domain isomorphic to your problem domain; import a set of new operators which match the known Mathematical syntax of that domain; and then state your problem in terms of that Mathematical domain by using those operators." In either of these cases, nobody can really be expected to just jump in and read the code without looking something, or a lot of somethings, up.
Sure - code is written in the language of the domain. If you don't know what an interest rate calculation is then even the best-written implementation won't be readable to you. But that domain terminology should make sense (indeed a large part of understanding a field is understanding its terminology), whereas many language precedence rules don't - they're completely arbitrary, there's no way to derive them from first principles if you forget.
I'd argue that a language where you don't have to memorize a precedence list is, all other things being equal, better than a language where you do. Lisp is such a language - the precedence is always explicit from the syntax, and one simply can't write an unbracketed (+ a b * c) - so I'm kind of surprised you mention it; I'd be surprised if Haskell is different in this regard. But rather than having to write a whole new language, it's more lightweight to form a "dialect" by declaring "we will write C (or whatever), but only use constructs that do not require memorizing the precedence table".