Re: Trying to use gdb on OS X
- Posted by jaygade Jun 09, 2013
- 3546 views
So here's where I'm at. For some reason, the computer isn't passing along -1 (#FFFFFFFF) correctly in the call to and_bits() in be_runtime.c. It only seems to fail when -1 is combined with a larger integer that gets promoted to an atom, like #DEADBEE5.
Here's my test program:
include std/convert.e
include std/math.e

printf(1, "1. and_bits(#3EADBEE5, -1): Exp: %x | Res: %x\n", {#3EADBEE5, and_bits(#3EADBEE5, -1)})
printf(1, "2. and_bits(#DEADBEE5, -1): Exp: %x | Res: %x\n", {#DEADBEE5, and_bits(#DEADBEE5, -1)})
printf(1, "3. and_bits(-1, #DEADBEE5): Exp: %x | Res: %x\n", {#DEADBEE5, and_bits(-1, #DEADBEE5)})
printf(1, "4. and_bits(-1, -1): Exp: %x | Res: %x\n", {-1, and_bits(-1, -1)})
printf(1, "5. and_bits(-1, #FFFFFFFF): Exp: %x | Res: %x\n", {-1, and_bits(-1, #FFFFFFFF)})
printf(1, "6. and_bits(#FFFFFFFF, #FFFFFFFF): Exp: %x | Res: %x\n", {#FFFFFFFF, and_bits(#FFFFFFFF, #FFFFFFFF)})
printf(1, "7. and_bits(3735928549, -1): Exp: %x | Res: %x\n", {3735928549, and_bits(3735928549, -1)})
printf(1, "8. and_bits(3735928549, #DEADBEE5): Exp: %x | Res: %x\n", {3735928549, and_bits(3735928549, #DEADBEE5)})
-- Fails tests 2, 3, 5, 7
1. and_bits(#3EADBEE5, -1): Exp: 3EADBEE5 | Res: 3EADBEE5
2. and_bits(#DEADBEE5, -1): Exp: DEADBEE5 | Res: 0
3. and_bits(-1, #DEADBEE5): Exp: DEADBEE5 | Res: 0
4. and_bits(-1, -1): Exp: FFFFFFFF | Res: FFFFFFFF
5. and_bits(-1, #FFFFFFFF): Exp: FFFFFFFF | Res: 0
6. and_bits(#FFFFFFFF, #FFFFFFFF): Exp: FFFFFFFF | Res: FFFFFFFF
7. and_bits(3735928549, -1): Exp: DEADBEE5 | Res: 0
8. and_bits(3735928549, #DEADBEE5): Exp: DEADBEE5 | Res: DEADBEE5
Using gdb, I've traced the program to be_runtime.c line 1761.
Dand_bits (a=0x804e20, b=0xbfffe220) at be_runtime.c:1761
1761        return and_bits( (unsigned long)(a->dbl), (unsigned long)(b->dbl));
At this point, a->dbl and b->dbl have the correct values (but in double format). The cast should take care of that and put both values on the stack for the and_bits() call.
(gdb) p/x (unsigned long)(a->dbl)
$58 = 0xdeadbee5
(gdb) p/x (unsigned long)(b->dbl)
$59 = 0xffffffff
I can tell gdb to return at that point and it works correctly, so the assembly must be generated incorrectly. Now, my understanding of x86 assembly is even worse than my understanding of C, and my understanding of floating-point and vector instructions is non-existent.
Here's the disassembly for line 1761:
0x002c1498 <Dand_bits+13>:   mov    0xc(%ebp),%eax
0x002c149b <Dand_bits+16>:   movsd  (%eax),%xmm0
0x002c149f <Dand_bits+20>:   movapd %xmm0,%xmm1
0x002c14a3 <Dand_bits+24>:   lea    0x26de9(%ebx),%eax
0x002c14a9 <Dand_bits+30>:   movapd (%eax),%xmm0
0x002c14ad <Dand_bits+34>:   movapd %xmm0,%xmm2
0x002c14b1 <Dand_bits+38>:   cmplesd %xmm1,%xmm2
0x002c14b6 <Dand_bits+43>:   lea    0x26df9(%ebx),%eax
0x002c14bc <Dand_bits+49>:   movapd (%eax),%xmm0
0x002c14c0 <Dand_bits+53>:   minsd  %xmm0,%xmm1
0x002c14c4 <Dand_bits+57>:   xorpd  %xmm0,%xmm0
0x002c14c8 <Dand_bits+61>:   maxsd  %xmm0,%xmm1
0x002c14cc <Dand_bits+65>:   lea    0x26de9(%ebx),%eax
0x002c14d2 <Dand_bits+71>:   movapd (%eax),%xmm0
0x002c14d6 <Dand_bits+75>:   andpd  %xmm2,%xmm0
0x002c14da <Dand_bits+79>:   subpd  %xmm0,%xmm1
0x002c14de <Dand_bits+83>:   cvttpd2dq %xmm1,%xmm1
0x002c14e2 <Dand_bits+87>:   movdqa %xmm2,%xmm0
0x002c14e6 <Dand_bits+91>:   psllq  $0x1f,%xmm0
0x002c14eb <Dand_bits+96>:   pxor   %xmm0,%xmm1
0x002c14ef <Dand_bits+100>:  movd   %xmm1,%edx
0x002c14f3 <Dand_bits+104>:  mov    0x8(%ebp),%eax
0x002c14f6 <Dand_bits+107>:  movsd  (%eax),%xmm0
0x002c14fa <Dand_bits+111>:  movapd %xmm0,%xmm1
0x002c14fe <Dand_bits+115>:  lea    0x26de9(%ebx),%eax
0x002c1504 <Dand_bits+121>:  movapd (%eax),%xmm0
0x002c1508 <Dand_bits+125>:  movapd %xmm0,%xmm2
0x002c150c <Dand_bits+129>:  cmplesd %xmm1,%xmm2
0x002c1511 <Dand_bits+134>:  lea    0x26df9(%ebx),%eax
0x002c1517 <Dand_bits+140>:  movapd (%eax),%xmm0
0x002c151b <Dand_bits+144>:  minsd  %xmm0,%xmm1
0x002c151f <Dand_bits+148>:  xorpd  %xmm0,%xmm0
0x002c1523 <Dand_bits+152>:  maxsd  %xmm0,%xmm1
0x002c1527 <Dand_bits+156>:  lea    0x26de9(%ebx),%eax
0x002c152d <Dand_bits+162>:  movapd (%eax),%xmm0
0x002c1531 <Dand_bits+166>:  andpd  %xmm2,%xmm0
0x002c1535 <Dand_bits+170>:  subpd  %xmm0,%xmm1
0x002c1539 <Dand_bits+174>:  cvttpd2dq %xmm1,%xmm1
0x002c153d <Dand_bits+178>:  movdqa %xmm2,%xmm0
0x002c1541 <Dand_bits+182>:  psllq  $0x1f,%xmm0
0x002c1546 <Dand_bits+187>:  pxor   %xmm0,%xmm1
0x002c154a <Dand_bits+191>:  movd   %xmm1,%eax
0x002c154e <Dand_bits+195>:  mov    %edx,0x4(%esp)
0x002c1552 <Dand_bits+199>:  mov    %eax,(%esp)
0x002c1555 <Dand_bits+202>:  call   0x2c1423 <and_bits>
By the time that and_bits() gets called,
Breakpoint 4, and_bits (a=3735928549, b=0) at be_runtime.c:1754
1754        a = a & b;
b is equal to 0 when it should be 0xFFFFFFFF.
gcc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
I may try removing the -ffast-math option and see what happens. I should also take a closer look at test 4 and figure out why it is passing.