Re: Trying to use gdb on OS X


I think I can guarantee the reason: the compiler is using SSE* instructions for the conversion.

When I compile and run this, the program is the same size, but the instructions are different -- it uses the normal 387 math instructions for the conversion:

/* test-dbl-ll32.c */ 
#include <stdio.h>  
  
/* Cast through long long first: the double is converted to a 64-bit 
   integer and only then truncated to 32 bits. */ 
void convert_double( double d ){  
	long l = (long) (long long) d;  
	unsigned long u = (unsigned long) (long long) d;  
	printf("%g -> %lx %lx\n", d, l, u );  
}  
  
int main(){  
	convert_double( (double) -1 );  
	convert_double( (double) 0xdeadbeef );  /* too big for a 32-bit signed long */  
	return 0;  
}  

Results:

jason$ ./test-dbl-ll32 
-1 -> ffffffff ffffffff 
3.73593e+09 -> deadbeef deadbeef 

When I compile Matt's original code with either -march=i386 or -mfpmath=387:

jason$ ./test-dbl 
-1 -> ffffffff ffffffff 
3.73593e+09 -> 80000000 deadbeef 
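
For reference, here is a minimal sketch of what I assume the conversion in Matt's original test looks like -- the same program, just without the intermediate (long long) casts. Casting a double that is too big for a 32-bit signed long is undefined behavior, and both the SSE conversion (cvttsd2si) and the 387 store (fistp) produce 0x80000000 in that case, which matches the 80000000 above. This file is my reconstruction, not Matt's actual code:

/* my reconstruction of test-dbl.c -- direct casts, no (long long) intermediate */ 
#include <stdio.h> 
 
void convert_double( double d ){ 
	long l = (long) d;                    /* out of range for 0xdeadbeef on 32-bit */ 
	unsigned long u = (unsigned long) d; 
	printf("%g -> %lx %lx\n", d, l, u ); 
} 
 
int main(){ 
	convert_double( (double) -1 ); 
	convert_double( (double) 0xdeadbeef ); 
	return 0; 
} 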

Interesting: with -march=i386, the code uses only 387 instructions, but with -mfpmath=387 I still see some SSE* instructions in the disassembly.

I'm going to post this now, but out of curiosity I will try compiling test-dbl-ll32 with -msse and -msse2.

Edit: the -msse and -msse2 code are identical, and both use 387 instructions. :) Both are also identical to the version built with only -m32.

Edit2: Compiling Matt's original code with -mno-sse also works, but the generated code is subtly different from the -march=i386 version.

While this should still be tested on ARM to be sure, I am currently of the opinion that casting the double to long long (or int64_t) before casting to unsigned long (or whatever portable type we settle on) is the correct thing to do on all platforms, and that eliminating the #ifdef and the separate double-to-int conversion function is the best solution.
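
To make that concrete, here is a minimal sketch of the kind of conversion I mean (the helper name and the exact target type are mine, for illustration only):

/* Hypothetical helper: always go through a 64-bit signed integer first. */ 
#include <stdint.h> 
 
static unsigned long double_to_ulong( double d ){ 
	/* (int64_t) d is well-defined for values like 0xdeadbeef, and the 
	   truncation to unsigned long is well-defined modulo 2^32 on 32-bit 
	   targets, so we get deadbeef instead of 80000000. */ 
	return (unsigned long) (int64_t) d; 
} 

On a 64-bit target the intermediate cast should be a no-op, so it shouldn't cost anything there.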
