I think the point is significant figures, rather than decimal places: so 123,456,789 would be stored as 123,457,000 with six significant figures even without a decimal point coming along to upset everyone.
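To make that concrete, here's a quick Python sketch (just an illustration, not from any particular system) that round-trips an integer through IEEE-754 single precision via the standard `struct` module. Single precision actually keeps about seven decimal digits, so 123,456,789 survives slightly better than the six-figure example, but it still doesn't come back intact:

```python
import struct

def to_float32(x):
    # Round-trip a Python float through IEEE-754 single precision
    # by packing it into 4 bytes and unpacking it again.
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(123456789))  # 123456792.0 -- the low digits are gone
```

The nearest representable single-precision values around 1.2e8 are 8 apart, so the stored value lands on 123,456,792 rather than the number you asked for.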
Of course the vagaries of the various floating-point formats, where the sign and exponent are crammed into that single-precision 32-bit word alongside the mantissa (IEEE-754 single precision keeps only 24 significant bits, roughly 7 decimal digits), can result in a difficult-to-predict number of significant figures. I've generally shied away from using floating-point numbers in the programming I do myself, but that's easy for me to say, as I mostly do systems stuff rather than complicated and awkward civil engineering projects. Which means I never got to play on something like a Cray*, but on the plus side I never had to use floating-point numbers.
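One way to see how unpredictable the precision gets (again just a Python sketch using the standard `struct` module, not any particular machine): below 2^24 a single-precision float can hold every integer exactly, and the moment you cross that line whole integers start to vanish.

```python
import struct

def to_float32(x):
    # Round-trip through IEEE-754 single precision (4-byte float).
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(16777216))  # 2**24: represented exactly
print(to_float32(16777217))  # 2**24 + 1: rounds back down to 16777216.0
```

So "how many significant figures do I get?" has no single answer: it depends on where on the number line you happen to be standing.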
* but I did get to play on weird 36-bit PDP-10s, which I think are just as interesting.
Edit: there are also BCD numbers: "binary coded decimal", where each decimal digit is stored in its own 4-bit unit (a nibble). Not very efficient in terms of storage, but it does allow for definite and exact representation of decimal values. At the cost of involving COBOL, which is one of the oldest programming languages and, to my mind, the worst.
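A minimal sketch of the packing scheme in Python (hypothetical helper names, not any real BCD library): one decimal digit per nibble, which is why a BCD value printed in hex reads back as its own decimal digits.

```python
def to_bcd(n):
    """Pack a non-negative integer into BCD, one decimal digit per 4-bit nibble."""
    result = 0
    for shift, digit in enumerate(reversed(str(n))):
        result |= int(digit) << (4 * shift)
    return result

def from_bcd(b):
    """Unpack a BCD value back into an ordinary integer, nibble by nibble."""
    n, factor = 0, 1
    while b:
        n += (b & 0xF) * factor  # low nibble is the current decimal digit
        factor *= 10
        b >>= 4
    return n

print(hex(to_bcd(1234)))  # 0x1234 -- the decimal digits survive verbatim in hex
print(from_bcd(0x1234))   # 1234
```

The storage cost is visible too: four bits buy you only the values 0-9 instead of 0-15, so you waste six codes per nibble, but every decimal value comes back exactly, with no floating-point rounding in sight.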