I'm reading Computer Systems: A Programmer's Perspective (Bryant & O'Hallaron). In Chapter 2, Figure 2.4 shows the following code:
#include <stdio.h>
typedef unsigned char *byte_pointer;
void show_bytes(byte_pointer start, int len) {
    int i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]);
    printf("\n");
}
void show_int(int x) {
    show_bytes((byte_pointer) &x, sizeof(int));
}
void show_float(float x) {
    show_bytes((byte_pointer) &x, sizeof(float));
}
void show_pointer(void *x) {
    show_bytes((byte_pointer) &x, sizeof(void *));
}
Figure 2.4 Code to print the byte representation of program objects. This code uses casting to circumvent the type system. Similar functions are easily defined for other data types.
Then, using the following test function:
void test_show_bytes(int val) {
    int ival = val;
    float fval = (float) ival;
    int *pval = &ival;
    show_int(ival);
    show_float(fval);
    show_pointer(pval);
}
According to the book, this test function should print the following on 64-bit Linux when the value is 12345:
39 30 00 00
00 e4 40 46
b8 11 e5 ff ff 7f 00 00
However, when I run the code I get the following result:
39 30 00 00
00 00 00 00
d8 d6 54 c3 fd 7f 00 00
I'm using gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04). Below is the code I'm running, based on the examples from the book. I'm pretty new to C; any ideas why my results are different?
#include <stdio.h>
typedef unsigned char *byte_pointer;
int main() {
    int val = 12345;
    test_show_bytes(val);
}
void test_show_bytes(int val) {
    int ival = val;
    float fval = (float) ival;
    int *pval = &ival;
    show_int(ival);
    show_float(fval);
    show_pointer(pval);
}
void show_bytes(byte_pointer start, int len) {
    int i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]);
    printf("\n");
}
void show_int(int x) {
    show_bytes((byte_pointer) &x, sizeof(int));
}
void show_float(float x) {
    show_bytes((byte_pointer) &x, sizeof(float));
}
void show_pointer(void *x) {
    show_bytes((byte_pointer) &x, sizeof(void *));
}