Re: bug in remainder ?
- Posted by Robert Craig <rds at ATTCANADA.NET> Feb 03, 2000
- 414 views
Daniel Johnson writes:

> ? remainder(1, .1)
> 0.1
> ? remainder(1, .01)
> 0.01
> ? remainder(10, 1)
> 0

(as he corrected himself later)

The last case is obviously correct: the remainder of 10 divided by 1 is 0. The first two look wrong, but that's because we humans can represent numbers like .1 exactly in our brains, whereas Intel 64-bit floating-point has no bit pattern that corresponds exactly to .1 (or .01), so it uses something that's "very close to .1". The machine must figure that 1 is 9 times "very close to .1", plus "very close to .1". So the remainder must be "very close to .1", which looks like 0.1 when you print it, but isn't exactly 0.1. In the floating-point case I simply call a C library routine to compute the result, so blame it all on C and Intel.

The moral is: don't expect perfect accuracy in floating-point calculations.

    if 1 = .1+.1+.1+.1+.1+.1+.1+.1+.1+.1 then
        puts(1, "perfect floating-point\n")
    else
        puts(1, "fuzzy floating-point\n")
    end if

Regards,
   Rob Craig
   Rapid Deployment Software
   http://www.RapidEuphoria.com
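[Editor's note: the effect described above can be reproduced directly in C. The post doesn't name the C library routine used; the sketch below assumes it behaves like C's fmod(), which computes a truncated-division remainder and so matches the examples quoted here.]

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* The double nearest to .1 is slightly LARGER than 1/10,
           so 1.0 holds only nine full copies of it, leaving a
           remainder slightly smaller than .1. */
        double r = fmod(1.0, 0.1);

        printf("%.17g\n", r);            /* close to, but not exactly, 0.1 */
        printf("%s\n", r == 0.1 ? "equal" : "not equal");
        return 0;
    }

Printed with only a few digits (as most interpreters do by default), r rounds to "0.1", which is why the original result looked like an exact answer.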