1. RE: printf() on decimals
aku saya wrote:
> printf(1, "%.16g\n", 8999999999999990)
> printf(1, "%.16g\n", 9999999999999990)
>
> Why is the last line printed differently?
I think it's because you've gone over what 64-bit precision can handle.
A double floating point number (what Euphoria uses) has a 52-bit stored
mantissa (53 bits counting the implied leading 1), along with an 11-bit
exponent and a sign bit. Therefore, the largest integer up to which
every integer is exactly representable is 2^53, or
9,007,199,254,740,992--a number I'd call *just* (what's 7 trillion
among friends :) over 9 quadrillion (though I suppose it'd be 9
thousand billion for those who use the British style numbering).
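You can see the boundary for yourself (a quick check; the exact digits
printed will depend on the C library underneath, which is part of what
this thread is about):

-- at 2^53 the gap between adjacent doubles widens to 2, so
-- 2^53 + 1 rounds back down to 2^53
printf(1, "%.17g\n", power(2, 53))     -- 9007199254740992
printf(1, "%.17g\n", power(2, 53) + 1) -- 9007199254740992 again
printf(1, "%.17g\n", power(2, 53) + 2) -- 9007199254740994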
Try this
? 9999999999999990 - 9999999999999990
? 9999999999999991 - 9999999999999990
? 9999999999999992 - 9999999999999990
? 9999999999999993 - 9999999999999990
? 9999999999999994 - 9999999999999990
? 9999999999999995 - 9999999999999990
? 9999999999999996 - 9999999999999990
? 9999999999999997 - 9999999999999990
? 9999999999999998 - 9999999999999990
On W2K, I get:
Result   Correct
  0         0
  2         1
  2         2
  2         3
  4         4
  6         5
  6         6
  6         7
  8         8
What does Linux report?
"15 digits of accuracy" is really an estimate, since 10^15 != 2^53, but
the two are 'relatively' close (10^15 is only about a quadrillion more
than 2^53, or within about 10% :). As Rob said, it's probably the
difference between gcc's and Watcom's libraries, though one could say
that Watcom is being more honest about the result.
You shouldn't trust the precision of any results larger than 2^53. It's
just like considering that decimal places are rounded off.
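If you want to guard against this in code, here's a minimal sketch
(exact_int is just an illustrative name, not a built-in or library
routine):

-- returns 1 only if x is an integer small enough that a double
-- represents it exactly
constant MAX_EXACT = power(2, 53)

function exact_int(atom x)
    return x = floor(x) and x >= -MAX_EXACT and x <= MAX_EXACT
end function

? exact_int(8999999999999990) -- 1: still exact
? exact_int(9999999999999990) -- 0: beyond 2^53, don't trust it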
Matt Lewis
2. RE: printf() on decimals
- Posted by eugtk at yahoo.com
Jul 01, 2003
--- Matt Lewis <matthewwalkerlewis at yahoo.com> wrote:
> Try this
> ? 9999999999999990 - 9999999999999990
> ? 9999999999999991 - 9999999999999990
> ? 9999999999999992 - 9999999999999990
> ? 9999999999999993 - 9999999999999990
> ? 9999999999999994 - 9999999999999990
> ? 9999999999999995 - 9999999999999990
> ? 9999999999999996 - 9999999999999990
> ? 9999999999999997 - 9999999999999990
> ? 9999999999999998 - 9999999999999990
>
> On W2K, I get:
> Result   Correct
>   0         0
>   2         1
>   2         2
>   2         3
>   4         4
>   6         5
>   6         6
>   6         7
>   8         8
>
> What does Linux report?
Exactly the same:
0
2
2
2
4
6
6
6
8
3. RE: printf() on decimals
- Posted by eugtk at yahoo.com
Jul 01, 2003
--- Matt Lewis <matthewwalkerlewis at yahoo.com> wrote:
> Try this
> ? 9999999999999990 - 9999999999999990
> ? 9999999999999991 - 9999999999999990
> ? 9999999999999992 - 9999999999999990
> ? 9999999999999993 - 9999999999999990
> ? 9999999999999994 - 9999999999999990
> ? 9999999999999995 - 9999999999999990
> ? 9999999999999996 - 9999999999999990
> ? 9999999999999997 - 9999999999999990
> ? 9999999999999998 - 9999999999999990
Hmmm. If I try this with Ruby, I get:
0
1
2
3
4
5
6
7
8
So I guess if you're interested in greater accuracy,
then that's the way to go.
Irv
4. RE: printf() on decimals
eugtk at yahoo.com wrote:
>
> Hmmm. If I try this with Ruby, I get:
> 0
> 1
> 2
> 3
> 4
> 5
> 6
> 7
> 8
>
> So I guess if you're interested in greater accuracy,
> then that's the way to go.
Yes, it looks like Ruby switches to arbitrary-precision integers
(Bignums) where it needs to, so you'd still have exact results for
integers this big. AFAICT, however, Ruby still uses 64-bit FP numbers
for Floats.
Still, whenever this topic comes up, I start to wonder why people want
or need to use numbers with this sort of precision. In any case, there
are several libs in the archives that can do this for you.
Matt Lewis
5. RE: printf() on decimals
- Posted by gertie at visionsix.com
Jul 01, 2003
On 1 Jul 2003, at 10:18, Matt Lewis wrote:
> [snip]
> You shouldn't trust the precision of any results larger than 2^53.
> Treat anything beyond that like decimal places that have been
> rounded off.
This is why i was asking for (and i wrote) a string math lib, and so
many people poo-poo'd the idea. There are no reasonable limits to
precision or digit count in string math. You can have 1000's of digits
on both sides of the decimal point, in any base you desire. I can't
believe people are still griping about the limits of the built-in math
and not using string math libs.
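For anyone who hasn't tried it, the core trick is simple enough to
sketch in a few lines (just an illustration of the idea, not Kat's lib;
str_add is a made-up name):

-- add two non-negative decimal integers held as ASCII digit strings
function str_add(sequence a, sequence b)
    sequence result
    integer carry, s
    -- pad the shorter number with leading zeros
    while length(a) < length(b) do a = '0' & a end while
    while length(b) < length(a) do b = '0' & b end while
    result = repeat('0', length(a))
    carry = 0
    for i = length(a) to 1 by -1 do
        s = (a[i] - '0') + (b[i] - '0') + carry
        carry = floor(s / 10)
        result[i] = remainder(s, 10) + '0'
    end for
    if carry then
        result = '1' & result
    end if
    return result
end function

puts(1, str_add("9999999999999990", "8") & '\n') -- 9999999999999998, exact

Division and other bases take more work, but addition alone shows why
digit strings never run out of precision.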
Kat,
noticing the developer of the C1 puter stopped talking to her when she
gave away the secrets for a much faster code morphing design. (well, no
one irl has done anything but laugh about it). Sç®ëw lîfè.