While reading Hacking: The Art of Exploitation (a wonderful book!), I came across this function:
void binary_print(unsigned int value) {
    unsigned int mask = 0xff000000;     // Start with a mask for the highest byte.
    unsigned int shift = 256*256*256;   // Start with a shift for the highest byte.
    unsigned int byte, byte_iterator, bit_iterator;

    for(byte_iterator=0; byte_iterator < 4; byte_iterator++) {
        byte = (value & mask) / shift;  // Isolate each byte.
        printf(" ");
        for(bit_iterator=0; bit_iterator < 8; bit_iterator++) {  // Print the byte's bits.
            if(byte & 0x80)      // If the highest bit in the byte isn't 0,
                printf("1");     // print a 1.
            else
                printf("0");     // Otherwise, print a 0.
            byte *= 2;           // Move all the bits to the left by 1.
        }
        mask /= 256;    // Move the bits in mask right by 8.
        shift /= 256;   // Move the bits in shift right by 8.
    }
}
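For reference, this is the small driver I used to produce the table below. It's my own code, not from the book; it assumes unsigned int is 32 bits and that it is compiled in the same file as the function above.

#include <stdio.h>

void binary_print(unsigned int value);  // The function quoted above.

int main(void) {
    unsigned int inputs[] = {0, 2, 1, 1024, 512, 64};
    int i;

    for(i = 0; i < 6; i++) {
        printf("%u :", inputs[i]);   // binary_print() itself prints a leading space before each byte.
        binary_print(inputs[i]);
        printf("\n");
    }
    return 0;
}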
Here's an input-output table for the function:
==============================================
 INPUT : OUTPUT
==============================================
     0 : 00000000 00000000 00000000 00000000
     2 : 00000000 00000000 00000000 00000010
     1 : 00000000 00000000 00000000 00000001
  1024 : 00000000 00000000 00000100 00000000
   512 : 00000000 00000000 00000010 00000000
    64 : 00000000 00000000 00000000 01000000
==============================================
From this table I can tell that binary_print() prints the binary representation of its unsigned int argument, one byte at a time.
But I don't understand exactly how the function produces that output. Specifically:
- What is mask? How did the author arrive at the value 0xff000000? (0xff000000 is close to 2^32 - 1, which I think is the maximum value of an unsigned int on my system.)
- What is shift? Why initialize it to 256^3? (To me it seems to have something to do with place values in hexadecimal.)
- What actually happens in these lines (I've pasted the trace I tried below)?
  - byte = (value & mask) / shift
  - byte & 0x80
In short, I would like to understand the method binary_print() uses to do the conversion.
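To make the question more concrete, here is a little trace program I wrote while trying to follow the outer loop myself. Again, this is my own code, not from the book, and it assumes a 32-bit unsigned int. It just prints the intermediate values produced by the two lines I'm asking about, using value = 1024 from the table:

#include <stdio.h>

int main(void) {
    unsigned int value = 1024;          // One of the inputs from the table above.
    unsigned int mask  = 0xff000000;    // Same starting values as in binary_print().
    unsigned int shift = 256*256*256;
    unsigned int byte, byte_iterator;

    for(byte_iterator = 0; byte_iterator < 4; byte_iterator++) {
        byte = (value & mask) / shift;
        printf("pass %u: value & mask = 0x%08x, byte = 0x%02x, byte & 0x80 = 0x%02x\n",
               byte_iterator, value & mask, byte, byte & 0x80);
        mask /= 256;
        shift /= 256;
    }
    return 0;
}

Running it, byte comes out as 4 on the third pass, which matches the 00000100 group in the table, but I still don't see why dividing (value & mask) by shift is the right way to isolate that byte, or why testing byte & 0x80 picks out each bit.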