Re: Trying to use gdb on OS X
- Posted by jaygade Jun 11, 2013
It's already a problem on 64-bit, since we have long doubles (or extended floating point) that could in theory hold larger integer values than a 64-bit int. We need a way to do bitwise ops on these super-large values...
Does Euphoria use long doubles or extended floats internally now?
Right now, a normal double can hold an integer exactly with up to 53 bits of precision (52 explicit mantissa bits plus the implied leading bit), and such a value fits comfortably into a 64-bit integer type. So what I'm doing is converting that double to a 64-bit integer and keeping the lowest 32 bits. Then we can perform bitwise operations on the number. Remember, this is all internal to the C backend routines -- no double in these cases should be outside the range of a 32-bit integer, because that is how the backend is written. (Although I don't think there are any explicit checks for this.)
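To make that concrete, here is a minimal sketch of the two-step conversion in C; the helper name atom_to_bits32 is mine, not the backend's:

```c
#include <stdint.h>

/* Hypothetical helper (not the actual backend routine): an integral atom
   arrives as a C double, is converted to a 64-bit integer -- exact for any
   value that fits in a double's 53-bit significand -- and only the lowest
   32 bits are kept for the bitwise operation. */
uint32_t atom_to_bits32(double a)
{
    int64_t wide = (int64_t)a;   /* well-defined as long as |a| < 2^63 */
    return (uint32_t)wide;       /* keep only the lowest 32 bits */
}
```

An or_bits()-style operation would then just compute atom_to_bits32(x) | atom_to_bits32(y) and convert the 32-bit result back to an atom.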
Going forward, maybe we need explicit routines to handle real<-->integer conversions for users, regardless of the underlying type or size. I see that the library already has to_integer(), which returns 0 (or a caller-supplied default value) if the number is out of the bounds of a native 31-bit Euphoria integer.
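Such a routine might look something like this in C; the name real_to_int64 and its error-reporting convention are placeholders of mine, not anything that exists in the library:

```c
#include <stdint.h>
#include <stdbool.h>
#include <math.h>

#define TWO_POW_63 9223372036854775808.0   /* 2^63 as a double */

/* Hypothetical user-facing conversion: succeed only when the double is
   integral and fits in a 64-bit integer, otherwise report failure rather
   than silently truncating or wrapping. */
bool real_to_int64(double x, int64_t *out)
{
    if (x != floor(x))                        /* rejects fractions and NaN */
        return false;
    if (x < -TWO_POW_63 || x >= TWO_POW_63)   /* outside int64_t's range */
        return false;
    *out = (int64_t)x;
    return true;
}
```

The same shape would work for a 32-bit or pointer-sized target; only the bound changes.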
Do any 32-bit targets not support some kind of int64_t?
Nothing modern that I can think of. Win32s, maybe? Or really old 32-bit DOS systems?
I think it's really the compiler's job; the compiler knows enough about the underlying machine to know whether 64-bit arithmetic can be done in the processor or has to go through a support routine (like floating point in the old pre-387 days). I was looking into it and saw that djgpp supports long long. I guess it depends on how old a set of machines we want to support with new code, or else we suggest that users stick with existing older releases on older machines.
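Since the compiler is the one that knows, the source can simply ask it at build time. A rough probe along these lines (my own sketch, not something in the Euphoria tree):

```c
/* Rough build-time probe: C99 compilers must provide long long, which is
   enough for the 64-bit intermediate used above; gcc-family compilers
   (djgpp included) provide it as an extension; anything older would need
   a fallback path. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    typedef long long          e_int64;
    typedef unsigned long long e_uint64;
#elif defined(__GNUC__)
    typedef long long          e_int64;
    typedef unsigned long long e_uint64;
#else
#   error "no 64-bit integer type available on this target"
#endif
```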
If and when we work with more embedded architectures, we'll have to solve whatever other problems crop up then.
The question becomes: what do we do with input that is beyond that? It's very possible to get values too big for a 64-bit integer coming out of a Euphoria atom that has some of its lowest 32 bits set. It's not really a common or particularly useful thing to do, but someone will do it, and we should figure out how we're going to handle it. We could simply throw an error, or just say behavior is undefined when the value is too big.
Matt
How does Euphoria handle converting doubles too large for a Euphoria integer now? I tested on Windows with 4.0.5, and it fails with a type check error. Should this behavior change?
Remember, this is all for internal conversions between an integral atom stored as a C double and one stored as a C integer used either as an integer (for bitwise ops) or as a pointer. Not for user type conversions between arbitrary doubles and integers (at this point).
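Coming back to the too-big case on the internal path: in C, the cast itself is undefined behaviour once the value falls outside int64_t's range, so whichever policy we pick (raise an error, or define the result as the low 32 bits of the value), the range has to be checked before converting. A rough sketch of the error-raising option, reusing the same 2^63 bound as above:

```c
#include <stdint.h>

#define TWO_POW_63 9223372036854775808.0   /* 2^63 as a double */

/* Illustration of one possible policy, not current behaviour: refuse atoms
   whose magnitude exceeds what int64_t can represent, instead of letting
   the cast do something platform-dependent. Returns 0 on success. */
int bits32_checked(double a, uint32_t *out)
{
    if (!(a >= -TWO_POW_63 && a < TWO_POW_63))   /* out of range, or NaN */
        return -1;                               /* caller raises the error */
    *out = (uint32_t)(int64_t)a;
    return 0;
}
```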
I created the change because I compiled 4.0 on OS X, found that I was failing the *bits() routines in t_math.e, and tried to figure out why. I found that certain doubles were not being converted to unsigned longs correctly. Although I didn't realize it at the time, the same problem seems to be what makes the ARM port fail the same tests; I just came up with a different solution.
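For what it's worth, here is my guess at the class of failure in miniature (it may not be the exact case from t_math.e): the direct cast is undefined once the value can't be represented in the target unsigned type, so different compilers and CPUs legitimately disagree, while the two-step cast is well-defined.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    double a = -1.0;   /* e.g. the atom produced by not_bits(0) */

    /* Undefined behaviour: -1.0 cannot be represented as an unsigned long,
       so the result varies by platform and compiler (x86, ARM, PPC, ...). */
    unsigned long direct = (unsigned long)a;

    /* Well-defined: -1.0 -> (int64_t)-1 -> low 32 bits 0xFFFFFFFF. */
    uint32_t two_step = (uint32_t)(int64_t)a;

    printf("direct:   %#lx\n", direct);
    printf("two-step: %#x\n", (unsigned)two_step);
    return 0;
}
```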
Converting between (larger) doubles and (larger) integers and performing bitwise operations on them is certainly a worthwhile goal, but it wasn't the main thrust of my bugfixing.
I realize that this only applies to 32-bit versions of Euphoria, not necessarily to 64-bit versions, but it will have to be accounted for if there is one codebase which can be compiled to both.