Re: Casting int <--> double and signs
- Posted by jaygade Jul 17, 2013
An object isn't a pointer (I'm talking about the C code now). Whatever its value is, before we use it as a pointer in C code, it gets cast to some pointer type. I don't see anything in what you said users should be able to do that wouldn't work in practice, even if you got pointers whose representation as a signed integer is negative. That definitely won't happen on 64-bit; I'm not sure how often it might happen on 32-bit.
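Roughly what I mean, as a standalone sketch rather than the actual backend code: the pointer sits in an integer-sized object and comes back out with a cast, and C guarantees the round trip regardless of whether that integer happens to look negative.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double *d = malloc(sizeof *d);

    /* Keep the pointer in an integer-sized "object", then cast it back
       to a pointer type before using it. */
    intptr_t obj  = (intptr_t)d;
    double  *back = (double *)obj;

    /* The round trip compares equal to the original pointer even if obj
       happens to be negative as a signed integer (e.g. a 32-bit address
       above 0x80000000). */
    printf("equal: %d, obj as signed: %" PRIdPTR "\n", back == d, obj);

    free(d);
    return 0;
}
```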
But an object can represent a pointer, an integer, a Euphoria integer, or a munged pointer to a DBL or a SEQ. We don't know until we test it.
And there are points where an object which represents a value outside the range of a Euphoria integer (or outside the range of an unsigned Euphoria integer) gets cast to a eudouble.
I see what you're saying; I have to think about this a bit more. The reason it's cast to unsigned when converting to a DBL_PTR or SEQ_PTR is that shifts of some signed values are either undefined or implementation-defined in C. C only cares whether a pointer is signed if you are doing math on it.
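A quick illustration of the shift issue (a standalone sketch, not the backend code itself):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The same bit pattern, viewed as signed and as unsigned. */
    int32_t  sobj = -1073741824;        /* bit pattern 0xC0000000 */
    uint32_t uobj = (uint32_t)sobj;

    /* sobj << 3 would be undefined behavior (left shift of a negative value)
       and sobj >> 3 is only implementation-defined, so the shifting for the
       pointer munging is done on the unsigned view instead. */
    uint32_t shifted = uobj >> 3;       /* well-defined on unsigned operands */

    printf("0x%08" PRIX32 " >> 3 = 0x%08" PRIX32 "\n", uobj, shifted);
    return 0;
}
```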
Like I said, I need to wrap my head around this some more. Sometimes going through all the casts makes me think I'm working with LISP. But then, I've never worked with LISP.
What I really want to understand is: what assumptions is the code making at any given point? Are those assumptions correct? And if they're not, how do we correct them?
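To make that concrete for myself, here's a toy version of the kind of test I mean. The cutoffs and macro names are made up; the real backend has its own integer range and the DBL_PTR/SEQ_PTR munging, so this is only the shape of the check, not the actual encoding.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef intptr_t object;   /* one machine word carries every value */

/* Made-up cutoffs: small values are immediate integers, anything outside
   the range would be decoded as a munged pointer to a double or a sequence. */
#define EU_MININT    (-(INTPTR_MAX / 4) - 1)
#define EU_MAXINT    (INTPTR_MAX / 4)
#define IS_EU_INT(o) ((o) >= EU_MININT && (o) <= EU_MAXINT)

static void describe(object o)
{
    if (IS_EU_INT(o))
        printf("%" PRIdPTR " is treated as an immediate Euphoria integer\n", o);
    else
        printf("%" PRIdPTR " would be decoded as a DBL_PTR or SEQ_PTR\n", o);
}

int main(void)
{
    describe(42);                   /* in range: an integer */
    describe((object)INTPTR_MAX);   /* out of range: some kind of pointer */
    return 0;
}
```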
That would give you an error, since there are no such things as poke2u or poke2s. Just poke2. Yes, it's converted automatically for you.
Heh -- I knew this, but forgot it when I didn't want to type in a 64-bit signed value for poke8. I was trying to think of a situation where an unsigned value would be assumed, but a signed value would be possible.
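In C terms, the point is that only the low 16 bits get stored, so signed vs. unsigned is just a question of how you read the bytes back (illustrative, not the actual poke2 implementation):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t mem[2];

    /* Store -1 as 16 bits: only the bit pattern 0xFFFF goes into memory. */
    int16_t s = -1;
    memcpy(mem, &s, sizeof s);

    /* Reading the same two bytes back, the unsigned view and the signed view
       differ only in interpretation, not in what was stored. */
    uint16_t u;
    int16_t  s2;
    memcpy(&u, mem, sizeof u);
    memcpy(&s2, mem, sizeof s2);

    printf("unsigned view: %u, signed view: %d\n", (unsigned)u, (int)s2);  /* 65535 and -1 */
    return 0;
}
```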
It probably was an integer originally. But this has obvious problems if you're translating 64-bit code with a 32-bit translator, which I think is why it was changed.
That makes sense. I'll take your advice and test it.
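If the problem being hinted at is precision (I'm only guessing here), a quick standalone check shows it: on a 32-bit build a 64-bit value can't stay a native integer, and a C double only represents integers exactly up to 53 bits.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A 64-bit value that a double's 53-bit mantissa cannot hold exactly. */
    uint64_t big  = (UINT64_C(1) << 62) + 1;

    double   d    = (double)big;    /* what a 32-bit build might have to fall back on */
    uint64_t back = (uint64_t)d;    /* rounds to 2^62; the +1 is gone */

    printf("%" PRIu64 " -> %" PRIu64 " (equal: %d)\n", big, back, big == back);
    return 0;
}
```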