1. Remainder
- Posted by kbochert at copper.net Nov 08, 2003
- 716 views
remainder(100, .01) produced a result of .01 (.00999... before rounding). My proposed solution was to calculate X - floor(X/Y) * Y, and it solved that particular case, but had exactly the same problem in other simple cases where X was a multiple of Y.

The Intel FPU performs all internal operations on 80-bit values, so the division above produces a value that can be at most a single LSB off. If it is a bit off in the negative direction, then floor() 'quantizes' that error into an error of Y, the maximum error.

A solution is to observe the bit pattern of the double generated by X/Y and correct the case where it is a single bit less than a whole number. In C (and without sign handling):

    double rem(double X, double Y)
    {
        union { double d; long L[2]; } z;  /* assumes 32-bit little-endian longs */
        z.d = X / Y;
        if (z.L[0] == -1) {    /* low word all ones: quotient is one bit below a whole number */
            z.L[0] = 0;
            ++z.L[1];          /* bump it up to the whole number */
        }
        return X - floor(z.d) * Y;
    }

(Basically, if X is a multiple of Y, then the error-case division produces a double whose bit pattern looks something like 0x0048FFFFFFFFFFFF.)

This works very nicely on all cases I have tried where X is a multiple of Y, and at worst decreases the accuracy of a very few remainder() results (1 in 65k?) by a single bit.

How do you do this in EU??

KtB
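For concreteness, here is the behaviour being described as a small stand-alone C test. It compares the library fmod() with the rem() sketch above (copied here so the test compiles on its own); on a typical IEEE-754 system it prints roughly the values shown in the comments, though the exact digits depend on the platform.

    #include <math.h>
    #include <stdio.h>

    /* Copy of the rem() sketch above, kept here so the test is self-contained
       (assumes 32-bit little-endian longs; no sign handling). */
    static double rem(double X, double Y)
    {
        union { double d; long L[2]; } z;
        z.d = X / Y;
        if (z.L[0] == -1) {    /* quotient is one bit below a whole number */
            z.L[0] = 0;
            ++z.L[1];
        }
        return X - floor(z.d) * Y;
    }

    int main(void)
    {
        printf("fmod(100, .01) = %.17g\n", fmod(100.0, 0.01)); /* ~0.00999999..., rounds to .01 */
        printf("rem (100, .01) = %.17g\n", rem(100.0, 0.01));  /* 0 */
        return 0;
    }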
2. Re: Remainder
- Posted by Robert Craig <rds at RapidEuphoria.com> Nov 08, 2003
- 649 views
kbochert at copper.net wrote:
> A solution is to observe the bit pattern of the double generated by X/Y
> and correct the case where it is a single bit less than a whole number.
>
> In C (and without sign handling):
>
>     double rem(double X, double Y)
>     {
>         union { double d; long L[2]; } z;
>         z.d = X / Y;
>         if (z.L[0] == -1) {
>             z.L[0] = 0;
>             ++z.L[1];
>         }
>         return X - floor(z.d) * Y;
>     }
>
> (Basically, if X is a multiple of Y, then the error-case division produces
> a double whose bit pattern looks something like 0x0048FFFFFFFFFFFF.)
>
> This works very nicely on all cases I have tried where X is a multiple
> of Y, and at worst decreases the accuracy of a very few remainder() results
> (1 in 65k?) by a single bit.
>
> How do you do this in EU??

You have my source. In runtime.c, Dremainder(), you can see that I just call the C fmod() function, which I assume executes an Intel instruction. I also have a cryptic comment "for now". I don't remember why I said that. Maybe I thought there was a speed optimization I could do.

I never attempt to "fix" problems due to finite floating-point accuracy. Most languages that I'm aware of leave it to the programmer to handle situations like this.

One exception is APL, which has a concept called "comparison tolerance" or "fuzz". In APL, if two numbers are the same to about 13 digits (as I recall), then APL considers them to be exactly equal. The programmer can adjust this value up or down if he wants. This fuzz value pervades the whole APL system. I believe it also affects operations like floor and remainder, just like your idea above, but the differences can be much more than 1 bit.

The fuzz factor idea solves a lot of problems, but you can have weird cases where, for example, A = B and B = C but A != C. I don't think computer science types are happy with the idea.

Regards,
Rob Craig
Rapid Deployment Software
http://www.RapidEuphoria.com
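A rough illustration of why that last point bothers people: a minimal C sketch of fuzzy equality with a made-up tolerance (this is not APL's actual mechanism), showing that the relation stops being transitive.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical fuzzy equality: values are "equal" if they agree within tol. */
    static int fuzzy_eq(double a, double b, double tol)
    {
        return fabs(a - b) <= tol;
    }

    int main(void)
    {
        double tol = 1e-13;          /* roughly "same to about 13 digits" near 1.0 */
        double A = 1.0;
        double B = 1.0 + 0.6e-13;    /* within tolerance of A */
        double C = 1.0 + 1.2e-13;    /* within tolerance of B, but not of A */

        printf("A = B ? %d\n", fuzzy_eq(A, B, tol));  /* 1 */
        printf("B = C ? %d\n", fuzzy_eq(B, C, tol));  /* 1 */
        printf("A = C ? %d\n", fuzzy_eq(A, C, tol));  /* 0: equality is not transitive */
        return 0;
    }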
3. Re: Remainder
- Posted by Pete Lomax <petelomax at blueyonder.co.uk> Nov 08, 2003
- 705 views
On Sat, 08 Nov 2003 10:17:44 -0800, kbochert at copper.net wrote:

<snip>
> How do you do this in EU??

Hi Karl, Derek,

I've just finished testing Derek's offering, and coded some 180 tests from the IBM Hursley test suite. It passes (to within 1e-6) all the tests. I've also applied it to jbrown's fraclib routines, and they pass with 100% accuracy, no need for a fudge check. Now if only fraction types could be incorporated into the language somehow...

A copy of fraclib.e can be found on my website.

Pete
http://palacebuilders.pwp.blueyonder.co.uk/euphoria.html
4. Re: Remainder
- Posted by kbochert at copper.net Nov 08, 2003
- 760 views
On 8 Nov 2003 at 21:01, Pete Lomax wrote:

> On Sat, 08 Nov 2003 10:17:44 -0800, kbochert at copper.net wrote:
>
> <snip>
>
> > How do you do this in EU??
>
> Hi Karl, Derek,
>
> I've just finished testing Derek's offering, and coded some 180 tests
> from the IBM Hursley test suite. It passes (to within 1e-6) all the
> tests. I've also applied it to jbrown's fraclib routines, and they
> pass with 100% accuracy, no need for a fudge check. Now if only
> fraction types could be incorporated into the language somehow...
>
> A copy of fraclib.e can be found on my website.
>
> Pete
> http://palacebuilders.pwp.blueyonder.co.uk/euphoria.html

Without being able to understand all that code, I suspect that your use of fractions results in all remainders being done on integral arguments, and the standard fmod() works quite well there. I have only seen fmod()'s bad behavior occur with real, non-integral arguments that are multiples of one another (like remainder(3.9, 1.3)).

A really nice work-around!! (though I'm sure a fractions package has other reasons to exist)

KtB
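To make that bad case concrete, here is a small stand-alone C check. On a typical IEEE-754 system the first call returns a value near 1.3 rather than near 0, because the stored doubles for 3.9 and 1.3 are no longer exact multiples of one another; the exact digits may vary by platform.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* 3.9 and 1.3 are not exactly representable; the stored 3.9 is slightly
           smaller than three times the stored 1.3, so the true quotient is just
           under 3 and fmod() only subtracts 1.3 twice. */
        printf("fmod(3.9, 1.3)   = %.17g\n", fmod(3.9, 1.3));    /* ~1.2999999999999998, not 0 */
        printf("fmod(39.0, 13.0) = %.17g\n", fmod(39.0, 13.0));  /* integral arguments: exactly 0 */
        return 0;
    }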
5. Re: Remainder
- Posted by kbochert at copper.net Nov 09, 2003
- 660 views
On 8 Nov 2003 at 15:06, Robert Craig wrote:

> kbochert at copper.net wrote:
> > A solution is to observe the bit pattern of the double generated by X/Y
> > and correct the case where it is a single bit less than a whole number.
> >
> > In C (and without sign handling):
> >
> >     double rem(double X, double Y)
> >     {
> >         union { double d; long L[2]; } z;
> >         z.d = X / Y;
> >         if (z.L[0] == -1) {
> >             z.L[0] = 0;
> >             ++z.L[1];
> >         }
> >         return X - floor(z.d) * Y;
> >     }
> >
> > (Basically, if X is a multiple of Y, then the error-case division produces
> > a double whose bit pattern looks something like 0x0048FFFFFFFFFFFF.)
> >
> > This works very nicely on all cases I have tried where X is a multiple
> > of Y, and at worst decreases the accuracy of a very few remainder() results
> > (1 in 65k?) by a single bit.
> >
> > How do you do this in EU??
>
> You have my source. In runtime.c, Dremainder(), you can see that I just
> call the C fmod() function, which I assume executes an Intel instruction.
> I also have a cryptic comment "for now". I don't remember why I said that.
> Maybe I thought there was a speed optimization I could do.
>
> I never attempt to "fix" problems due to finite floating-point accuracy.
> Most languages that I'm aware of leave it to the programmer to handle
> situations like this.
>
> One exception is APL, which has a concept called "comparison tolerance"
> or "fuzz". In APL, if two numbers are the same to about 13 digits (as I
> recall), then APL considers them to be exactly equal. The programmer can
> adjust this value up or down if he wants. This fuzz value pervades the
> whole APL system. I believe it also affects operations like floor and
> remainder, just like your idea above, but the differences can be much
> more than 1 bit.
>
> The fuzz factor idea solves a lot of problems, but you can have weird
> cases where, for example, A = B and B = C but A != C. I don't think
> computer science types are happy with the idea.
>
> Regards,
> Rob Craig
> Rapid Deployment Software
> http://www.RapidEuphoria.com

The only issue here seems to be that the modulo nature of the function (floor() in my code, or the equivalent in the Intel 'repeated subtraction' algorithm) causes a situation in which an arbitrarily small inaccuracy can cause the result to 'toggle' to a very bad value. Intel's 80-bit internal precision limits this error to a single bit.

This method is a little different from a fuzz factor in that it only modifies a specific case. In rare circumstances, that modification can (I think) introduce a 1/2-bit error.

Perhaps that 'for now' comment came about from seeing others' problems with fmod(), and deciding that Intel's algorithm was good enough, pending investigation.

It should be pointed out that Intel does not actually supply a remainder function. There is an instruction FPREM (and FPREM1) that performs a partial remainder calculation. It is up to the user (the fmod() function in the C library) to write code that compensates for this. Another interesting note from the manual is: "FPREM produces an exact result; the precision exception does not occur and the rounding control has no effect."

My manual specifically mentions that an important use of FPREM is to reduce the arguments of periodic functions like sine, and it provides status bits to aid that use. In that usage, this modulo problem is meaningless. It certainly looks like the standard C library provides a naive implementation aimed at periodic functions.
I can't prove that the algorithm above is 'correct', but it seems to very clearly produce better results. KtB
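A tiny sketch of the 'toggle' effect described above, using hypothetical values and assuming IEEE-754 doubles: perturbing a quotient down by just one ULP changes floor() by a whole unit, which the naive X - floor(X/Y)*Y formula then turns into an error of nearly Y.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double X = 3.9, Y = 1.3;
        double q = 3.0;                    /* the mathematically exact quotient X/Y */
        double q_low = nextafter(q, 0.0);  /* the same quotient, one ULP too small  */

        /* floor() quantizes the one-ULP error into a full unit ... */
        printf("floor(q)     = %g\n", floor(q));      /* 3 */
        printf("floor(q_low) = %g\n", floor(q_low));  /* 2 */

        /* ... so the naive remainder toggles from ~0 to ~Y. */
        printf("X - floor(q)*Y     = %.17g\n", X - floor(q) * Y);      /* ~0   */
        printf("X - floor(q_low)*Y = %.17g\n", X - floor(q_low) * Y);  /* ~1.3 */
        return 0;
    }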