RE: printf() on decimals
aku saya wrote:
> printf(1, "%.16g\n", 8999999999999990)
> printf(1, "%.16g\n", 9999999999999990)
>
> Why is the last line printed differently?
I think it's because you've gone past what a 64-bit double can represent
exactly. A double-precision floating point number (what Euphoria uses)
has a 52-bit stored mantissa, giving 53 bits of effective precision
(along with an 11-bit exponent and a sign bit). Therefore, the largest
integer up to which every whole number is exactly representable is
2^53 = 9,007,199,254,740,992--a number I'd call *just* (what's 7
trillion among friends :) over 9 quadrillion (or nine thousand billion,
for those who use the old British long-scale names).
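A quick way to see where exactness runs out (a minimal Euphoria sketch
using the built-in power(); the variable name is just illustrative):

-- 2^53 is the last point at which consecutive whole numbers are exact
atom limit
limit = power(2, 53)     -- 9,007,199,254,740,992

? limit - (limit - 1)    -- 1: limit and limit - 1 are both exact
? (limit + 1) - limit    -- 0: 2^53 + 1 rounds back down to 2^53
? (limit + 2) - limit    -- 2: the next representable double above 2^53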
Try this:
? 9999999999999990 - 9999999999999990
? 9999999999999991 - 9999999999999990
? 9999999999999992 - 9999999999999990
? 9999999999999993 - 9999999999999990
? 9999999999999994 - 9999999999999990
? 9999999999999995 - 9999999999999990
? 9999999999999996 - 9999999999999990
? 9999999999999997 - 9999999999999990
? 9999999999999998 - 9999999999999990
On W2K, I get:
Result   Correct
     0         0
     2         1
     2         2
     2         3
     4         4
     6         5
     6         6
     6         7
     8         8
What does Linux report?
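For what it's worth, the even stepping in the Result column is just the
spacing between adjacent doubles at that magnitude, which is 2: every
odd literal has to snap to an even neighbour before the subtraction
happens. The same test can be run with runtime arithmetic instead of
literals -- a sketch (the name 'base' is mine):

-- each base + i gets rounded to the nearest representable double first
atom base
base = 9999999999999990
for i = 0 to 8 do
    printf(1, "offset %d gives %d\n", {i, (base + i) - base})
end for
-- under round-to-nearest-even this should match the table above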
"15 digits of accuracy" is really an estimate, since 10^15 != 2^53, but
the two are 'relatively' close (10^15 is only about a quadrillion more
than 2^53, or within about 10% :). As Rob said, it's probably the
difference between gcc's and Watcom's libraries, though one could say
that Watcom is being more honest about the result.
You shouldn't trust the precision of any result larger than 2^53; treat
the low digits the way you'd treat decimal places that have been
rounded off.
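Put another way, if you only ever ask for 15 significant digits, gcc's
and Watcom's runtimes should agree; a rough sketch of what I mean (the
outputs noted in the comments are what I'd expect, not guaranteed):

-- 16 significant digits already reach into the rounding error here
printf(1, "%.15g\n", 8999999999999990)   -- 8.99999999999999e+15
printf(1, "%.15g\n", 9999999999999990)   -- 9.99999999999999e+15
printf(1, "%.16g\n", power(2, 53))       -- 9007199254740992, still exact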
Matt Lewis