I wonder if it was 5*64 bits that got mangled in editing. If 256 bits is sufficient for most of their code, I could see corner cases needing a few more bits, but moving all the way to 512 bits would be overkill.
It's actually not a typo. Our "real" internal code starts with integer bounds on the inputs (say 2^26) and then computes, for each subexpression, how many bits are needed to represent it exactly. That can even lead to fractional bit counts: in "a + b + c", three values each bounded by 2^26 sum to at most 3*2^26, which takes 26 + log2(3) ≈ 27.58 bits. The generated code then rounds up to the next multiple of 64 bits.
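For the flavor of it, here's a minimal sketch of that bound tracking in Python (hypothetical names like bits_needed and storage_bits; not their actual generated code):

    import math

    WORD = 64  # assume the generated code works in 64-bit limbs

    def bits_needed(bound):
        # Exact bit count for values in [0, bound); may be fractional.
        return math.log2(bound)

    def storage_bits(bound):
        # Round the exact count up to the next multiple of the word size.
        return math.ceil(bits_needed(bound) / WORD) * WORD

    # Inputs a, b, c each bounded by 2^26; their sum is bounded by 3 * 2^26.
    input_bound = 2 ** 26
    sum_bound = 3 * input_bound

    print(bits_needed(sum_bound))   # ~27.58 -- a fractional bit count
    print(storage_bits(sum_bound))  # 64 -- rounded up to one 64-bit word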