My questions stem from trying to use printf to log things when building for platforms of multiple bit depths (32- and 64-bit, for example).
A problem that keeps rearing its ugly head is printing ints on multiple architectures. On 32-bit it would be something like
printf(" my int: %d\n", myInt);
but on 64-bit it would have to be changed to
printf(" my int: %ld\n", (long)myInt);
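For concreteness, here is a minimal sketch of the kind of cast-to-long call I end up writing (the variable name and value are just placeholders):

#include <stdio.h>

int main(void)
{
    int myInt = 42;

    /* Cast the argument to long and use %ld so the format specifier and
       the argument actually passed agree on both 32-bit and 64-bit builds. */
    printf("my int: %ld\n", (long)myInt);

    return 0;
}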
I have two related questions:
My first thought was that when you tell printf to print a variable with a given format, it looks at the address of that variable and grabs as many bytes as that format requires. This seemed like a big problem at first. For example, if you had a variable myChar that was a char (1 byte) but used a format specifier of %d, that would tell printf to go to the address of myChar and grab the next 4 bytes to treat as an int. If that were the case, it seems like printf would pick up garbage data from neighboring variables (because it was grabbing 4 bytes, but the real value is only 1 byte). This appears not to be the case, however: when I pass myChar and specify %d, printf grabs 1 byte and then pads the upper 3 bytes with 0s. Is my understanding correct here?
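Here is a small, self-contained example of the char case I'm describing (myChar and its value are just for illustration):

#include <stdio.h>

int main(void)
{
    char myChar = 'A';   /* a 1-byte variable */

    /* The char is handed to printf's variable-argument list and printed
       with the %d (int) conversion; this prints 65 on an ASCII system. */
    printf("myChar printed with %%d: %d\n", myChar);

    return 0;
}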
If the above is true, is there any real harm in always promoting variables up to their largest type to avoid the kinds of problems seen in the 32/64-bit case? For example, if you have a short variable myShort and an int variable myInt, is there any downside to always printing them as:
printf("myShort %ld", (long)myShort);
printf("myInt %ld", (long)myInt);
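Or, as a fuller sketch with the declarations included (the names and values are just examples):

#include <stdio.h>

int main(void)
{
    short myShort = 7;
    int   myInt   = 42;

    /* Cast everything up to long and use %ld across the board. */
    printf("myShort %ld\n", (long)myShort);
    printf("myInt %ld\n", (long)myInt);

    return 0;
}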
Thanks for any clarification.