"Campbell, Greg" <greg.campbell@us.michelin.com> writes:
> I've disassociated floats and exactness, that is floating point
> representations and exact matches do not seem to go together.
The issue is that "float" types are actually fractions encoded in
base 2, for efficiency reasons. Almost every time you go back and
forth between base 2 and base 10 you have to round: there is no exact
mapping between those two spaces.
For instance you cannot write 1/3 (one third) exactly in base 10,
whereas you can in base 3 using just a couple of digits (it's just
"0.1").
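To make the rounding concrete, here is a small Python sketch (Python
chosen just for illustration): the decimal fraction 0.1 has no finite
base-2 representation, so the stored double is only the nearest
representable binary value.

```python
from decimal import Decimal

# Decimal(float) reveals the exact value the binary double really holds,
# which is not the decimal 0.1 we wrote down.
print(Decimal(0.1))

# The accumulated rounding shows up in plain arithmetic.
print(0.1 + 0.2 == 0.3)  # False
```

The first print shows a long string of digits near 0.1 but not equal
to it, which is exactly the base-2/base-10 mismatch described above.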
> The idea was made more profound when I started looking into the
> multitude of options in representing a float in 16, 32 or 64
> bits. There are so many different ways to allocate bits for the
> number, and bits for the exponent, leading to radically different
> precisions.
Actually, on today's hardware I thought it was hard to find anything
other than IEEE 754 32-bit and 64-bit floats, standardized across all
platforms, with every 32-bit value exactly representable as a 64-bit
one. So that does not look like "many different ways" to me. Could
you give details?
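The subset relation can be checked directly. A hedged sketch in
Python, using `struct` to force a value through the IEEE 754 32-bit
format (the helper name `to_float32` is mine, not from the thread):

```python
import struct

def to_float32(x):
    """Round a double to the nearest IEEE 754 32-bit float, then widen back."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

v32 = to_float32(0.1)            # nearest float32 to 0.1; narrowing rounds
print(v32 == 0.1)                # False: 0.1 was rounded on the way down
print(to_float32(v32) == v32)    # True: widening float32 -> float64 is exact
```

Narrowing a double to 32 bits may round, but every 32-bit float maps
to exactly one 64-bit double, which is the subset property mentioned
above.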
> Between a value
> on the server and a value on the client a difference out in the 15th
> decimal place hardly seems surprising.
Whether the conversions and roundings happen on the server or on the
client does not seem to change the problem much, IMHO.
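A difference in the 15th decimal place is indeed what one should
expect: an IEEE 754 double has a 53-bit significand, roughly 15-17
significant decimal digits, so independently computed or converted
values can legitimately disagree right at that boundary. A quick
Python illustration:

```python
# Ten rounded additions of 0.1 land one ULP away from the exact 1.0.
a = sum([0.1] * 10)
b = 1.0
print(a)                    # 0.9999999999999999
print(a == b)               # False: the values differ past the 15th digit
print(abs(a - b) < 1e-15)   # True: but the gap is tiny, ~1 ULP
```

So a server and a client that perform the conversions in a different
order can each be "correct" to 15 digits and still compare unequal.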