Re: Trying to use gdb on OS X


Should I create a ticket? I don't want this issue to get lost either.

mattlewis said...

The number stored in a double doesn't necessarily fit into a 64-bit integer. Remember, part of the double precision floating point value is an exponent.

Yes, I know this, and I thought I had addressed it a couple of times already, unless I'm missing some other point.

Currently 32-bit Euphoria has 29-bit integers, and it can perform bitwise operations on integers up to 32 bits in size. Bitwise operations on integers larger than 32 bits produce incorrect results.

Integers between 30 and 32 bits in size are stored as doubles. To preserve all 32 bits, the C backend has to cast the double to a C integer (long or unsigned long), perform the operation, and then convert the result back to a double.
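To make that concrete, here is a minimal C sketch of that round-trip. This is not the actual backend source; the function name and the choice of unsigned long are just illustrative:

#include <stdio.h>

/* Illustrative only: a value that needs all 32 bits is held in a
 * double, cast to a C integer for the bitwise operation, and the
 * result is converted back to a double. */
double and_bits_via_cast(double a, double b)
{
    unsigned long x = (unsigned long)a;  /* undefined if a is negative or too big */
    unsigned long y = (unsigned long)b;
    return (double)(x & y);
}

int main(void)
{
    /* 4294967295 (#FFFFFFFF) needs 32 bits, so on the Euphoria side
     * it is stored as a double rather than as a native integer. */
    printf("%.0f\n", and_bits_via_cast(4294967295.0, 255.0));  /* 255 */
    return 0;
}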

The C standard says that converting a double to a long is undefined when the value is too big to fit in a long, and that converting a double that is negative or too big to an unsigned long is likewise undefined.
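For example (my own illustration; the relevant clause is C99 6.3.1.4, and the per-platform results described in the comment are typical observations, not guarantees):

#include <stdio.h>

int main(void)
{
    double d = -1.0;

    /* Undefined behavior per C99 6.3.1.4: -1.0 cannot be represented
     * in an unsigned long.  x86 compilers have historically produced
     * the wrapped all-ones bit pattern here, while ARM's
     * float-to-integer instructions saturate, so the same cast can
     * yield 0 instead. */
    unsigned long u = (unsigned long)d;
    printf("%lu\n", u);
    return 0;
}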

There was a bug in the Euphoria C backend that relied on this undefined behavior working a particular way. It happened to work on the platforms we already supported, which is why it wasn't caught until we started moving onto ARM and OS X. The bug manifested itself as incorrect results from bitwise operations when the arguments were negative or were larger than 29 bits.
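One portable way around this (a sketch of the general technique, not necessarily the fix that went into the backend; the helper names are hypothetical) is to go through a signed 64-bit intermediate, which can hold any value a 32-bit operand might take, and then truncate to exactly 32 bits:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helpers, illustrative only.  Defined for any double in
 * the 32-bit operand range: the int64_t conversion is in range, and
 * int64_t -> uint32_t truncation is well-defined modulo 2^32. */
static uint32_t double_to_bits32(double d)
{
    return (uint32_t)(int64_t)d;
}

/* Convert the 32-bit result back, treating bit 31 as the sign bit so
 * negative results round-trip (assuming signed 32-bit semantics are
 * wanted).  The subtraction avoids implementation-defined
 * unsigned-to-signed conversion. */
static double bits32_to_double(uint32_t u)
{
    return (u & 0x80000000u) ? (double)u - 4294967296.0 : (double)u;
}

double and_bits_portable(double a, double b)
{
    return bits32_to_double(double_to_bits32(a) & double_to_bits32(b));
}

int main(void)
{
    /* and_bits(-1, #0F) should be #0F on any platform. */
    printf("%.0f\n", and_bits_portable(-1.0, 15.0));  /* 15 */
    return 0;
}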

At this point I am not trying to address other behavior or larger values, although some very good points have been raised. Changing the *_bits() functions to work on the full range of integers that can be accurately represented is an interesting, but separate, problem.
