I am trying to set the bits in a double directly (IEEE Standard 754). Say I want to 'build' a 3: I would set the 51st and the 62nd bit of the double's representation. Bit 62 makes the biased exponent 1024 (i.e. 2^1) and bit 51 is the most significant mantissa bit, so in binary I get 1.1 × 2^1, which is 3 in decimal. I wrote this simple main:
#include <cstdint>
#include <iostream>

int main() {
    double t;
    // Write the bit pattern of t through a uint64_t pointer.
    uint64_t *i = reinterpret_cast<uint64_t*>(&t);
    uint64_t one = 1;
    // Bit 62: biased exponent 1024 (2^1); bit 51: top mantissa bit.
    *i = ((one << 51) | (one << 62));
    std::cout << sizeof(uint64_t) << " " << sizeof(uint64_t*) << " "
              << sizeof(double) << " " << sizeof(double*) << std::endl;
    std::cout << t << std::endl;
    return 0;
}
The output of this would be
8 8 8 8
3
when compiling with g++ 4.3 and no optimization. However, I get strange behavior if I add the -O2 or -O3 optimization flags. That is, if I leave the main as it is, I get the same output. But if I delete the line that prints the four sizeof values, then I get the output
0
The unoptimized version without the sizeof output correctly prints 3 as well.
So I am wondering whether this is an optimizer bug, or whether I am doing something wrong here.
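
For reference, here is a variant that copies the same bit pattern into the double with std::memcpy instead of writing through the casted pointer. This is only a sketch based on my guess that the pointer cast is what the optimizer treats differently; I have not tested it with g++ 4.3:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    // Same bit pattern as above: exponent bit 62 and mantissa bit 51.
    uint64_t bits = (uint64_t(1) << 51) | (uint64_t(1) << 62);
    double t;
    // Copy the representation instead of aliasing t through a uint64_t*.
    std::memcpy(&t, &bits, sizeof t);
    std::cout << t << std::endl; // expected: 3
    return 0;
}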