Or is that "last bit ambiguity" caused by something other?
It's a result of rounding in binary floating point itself ... with a value such as 22/7 there is no exact binary representation, so the least significant bit depends on how the value got rounded on the way to the result ... it might be a 1 or a 0. It can also vary from compiler to compiler (Pelles and MinGW may use different instructions or different intermediate precision) and, for operations beyond the basic ones, possibly from FPU to FPU (e.g. Intel vs. AMD).
With float values, something like 16 * 0.1 could give you 1.6, 1.5999999, or 1.6000001, because 0.1 has no exact binary representation. (Values like 16/2 are fine, since 8 is exactly representable.) Doubles are better because of their higher bit resolution, but the same issue is still there.
Go back to my little test program and run it...
// demonstration of floating point inaccuracies
#include <stdio.h>

int main(void)
{
    int i;
    float x = 1.0;

    for (i = 0; i < 100; i++)
    {
        printf("%f\t", x);
        x += 0.1;
    }
    return 0;
}
... change the float to a double and you should see that it gives you correct answers up to 10. But these are very small values... as the numbers get bigger, or carry more decimal places, the more likely you are to see the error.