First, I should point out that there's a lot of undefined behavior in this code, and it makes a lot of assumptions that aren't guaranteed by the standard. But that having been said, the question can be answered in terms of what you'll see on a typical implementation.
Let's assume that the data type sizes (in bytes) on the two machines are:
| type  | 32-bit size | 64-bit size |
|-------|-------------|-------------|
| long  | 4           | 8           |
| int   | 4           | 4           |
| short | 2           | 2           |
| char  | 1           | 1           |
Again, I don't think these are guaranteed, but they are very typical.
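If you want to double-check the sizes on your own machine, a quick check like the one below will print them (a minimal sketch for verification, not part of the code in question):

```c
#include <stdio.h>

int main(void)
{
    /* Print the sizes this particular compiler/platform uses. */
    printf("long:  %zu\n", sizeof(long));
    printf("int:   %zu\n", sizeof(int));
    printf("short: %zu\n", sizeof(short));
    printf("char:  %zu\n", sizeof(char));   /* always 1, by definition */
    return 0;
}
```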
Now consider what the code is doing. First, it's setting
long *p = (long *)8;
short *q = (short *)0;
These addresses do not refer to valid data (at least, not portably), but the code never actually dereferences them. All it's doing is using them to perform pointer arithmetic.
First, it increments them:
p = p + 1;
q = q + 1;
When adding an integer to a pointer, the integer is scaled by the size of the target data type. So in the case of `p`, the scale is 4 (for 32-bit) or 8 (for 64-bit). In the case of `q`, the scale is 2 (in both cases).
So `p` becomes 12 (for 32-bit) or 16 (for 64-bit), and `q` becomes 2 (in both cases). When printed in hex, 12 is `0xc` and 16 is `0x10`, so this is consistent with what you saw.
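You can see the same scaling rule without the bogus addresses by pointing into real arrays and measuring how far a single increment moves each pointer (a sketch with made-up variable names, assuming the typical sizes in the table above):

```c
#include <stdio.h>

int main(void)
{
    long  la[2];
    short sa[2];
    long  *lp = &la[0];
    short *sp = &sa[0];

    /* Incrementing a pointer advances it by sizeof(*ptr) bytes. */
    printf("long  step: %td bytes\n", (char *)(lp + 1) - (char *)lp);  /* typically 4 or 8 */
    printf("short step: %td bytes\n", (char *)(sp + 1) - (char *)sp);  /* typically 2 */
    return 0;
}
```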
Then it takes the differences between the two pointers, after first casting them to various pointer types:
c = (char *)p - (char *)q;
i = (int *)p - (int *)q;
l = (long *)p - (long *)q;
These subtractions are undefined behavior (the two pointers don't point into the same object), but if you assume that all pointer values are simple byte addresses, then here's what's happening.
First, when subtracting two pointers of the same type, the difference is divided by the size of the target data type. This gives the number of elements of that type separating the two pointers.
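For comparison, the well-defined version of this rule looks like the following: when both pointers point into the same array, the subtraction yields the number of elements between them (a minimal illustration with a made-up array, not the code in question):

```c
#include <stdio.h>

int main(void)
{
    int arr[10];
    int *a = &arr[2];
    int *b = &arr[7];

    /* The byte difference is 5 * sizeof(int), but pointer subtraction
       divides by sizeof(int) and yields 5, the element count. */
    printf("%td\n", b - a);   /* prints 5 */
    return 0;
}
```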
So in the case of `c` (type `char *`), it divides by 1, giving the raw pointer difference, which is (12 - 2) = 10 (for 32-bit) and (16 - 2) = 14 (for 64-bit).
For `i` (type `int *`), it divides by 4 (and truncates the result), so the difference is 10 / 4 = 2 (for 32-bit) and 14 / 4 = 3 (for 64-bit).
Finally, for `l` (type `long *`), it divides by 4 (for 32-bit) or 8 (for 64-bit), again truncating the result. So the difference is 10 / 4 = 2 (for 32-bit) and 14 / 8 = 1 (for 64-bit).
Again, the C standard does not guarantee any of this, so it is not safe to assume these results will be obtained on different platforms.