int x = 25, i;
float *p = (float *)&x;
printf("%f\n", *p);

I understand that the bit representations of floating point numbers and ints are different, but no matter what value I store, the answer is always 0.000000. Shouldn't it be some other value, depending on the floating point representation?

  • Read about [_undefined behaviour_](https://en.wikipedia.org/wiki/Undefined_behaviour). – too honest for this site Jul 09 '15 at 18:16
  • Why don't you try it in the other direction to see how a small floating point number is represented. (And/Or do some research at Wikipedia.) – rici Jul 09 '15 at 18:17
  • The value is 0.000000000000000000000000000000000000000000035, which displays as 0.000000 because you used the %f format. You can use [this web site](http://www.h-schmidt.net/FloatConverter/IEEE754.html) to try out other values. – Raymond Chen Jul 09 '15 at 18:19
  • I'm voting to close this question as off-topic because it seeks to explain UB. – Martin James Jul 09 '15 at 21:14
  • Also, it's a multi-dup, and was asked recently at least once. http://stackoverflow.com/questions/17898186/unexpected-output-of-printf – Martin James Jul 09 '15 at 21:18

2 Answers


Your code has undefined behavior -- but it will most likely behave as you expect, as long as the size and alignment of types int and float are compatible.

By using the "%f" format to print *p, you're losing a lot of information.

Try this:

#include <stdio.h>
int main(void) {
        int x = 25; 
        float *p = (float*)&x;
        printf("%g\n", *p);
        return 0;
}

On my system (and probably on yours), it prints:

3.50325e-44

The int value 25 has zeros in most of its high-order bits. Those bits are probably in the same place as the exponent field of type float -- resulting in a very small number.

Look up IEEE floating-point representation for more information. Byte order is going to be an issue. (And don't do this kind of thing in real code unless you have a very good reason.)
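
For example, here is a small sketch of my own (not part of the program above) that does the same reinterpretation with memcpy -- which sidesteps the aliasing problem of the pointer cast -- and dumps the bytes the two objects share; it assumes int and float are the same size, and the byte order you see depends on your machine's endianness:

#include <stdio.h>
#include <string.h>

int main(void) {
    int x = 25;
    float f;
    unsigned char bytes[sizeof x];

    memcpy(&f, &x, sizeof f);        /* reinterpret the int's bytes as a float */
    memcpy(bytes, &x, sizeof bytes); /* grab the raw bytes for display */

    printf("as float: %g\nbytes:    ", f);
    for (size_t i = 0; i < sizeof bytes; i++)
        printf("%02x ", bytes[i]);   /* printed from lowest address up */
    putchar('\n');
    return 0;
}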

As rici suggests in a comment, a better way to learn about floating-point representation is to start with a floating-point value, convert it to an unsigned integer of the same size, and display the integer value in hexadecimal. For example:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void show(float f) {
    unsigned int rep;
    memcpy(&rep, &f, sizeof rep);   /* copy the float's bytes into an unsigned int */
    printf("%g --> 0x%08x\n", f, rep);
}

int main(void) {
    if (sizeof (float) != sizeof (unsigned int)) {
        fprintf(stderr, "Size mismatch\n");
        exit(EXIT_FAILURE);
    }
    show(0.0);
    show(1.0);
    show(1.0/3.0);
    show(-12.34e5);
    return 0;
}
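
Assuming 32-bit IEEE-754 floats (so the size check passes), the output should look something like this:

0 --> 0x00000000
1 --> 0x3f800000
0.333333 --> 0x3eaaaaab
-1.234e+06 --> 0xc996a280
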
Keith Thompson

For the purposes of this discussion, we're going to assume both int and float are 32 bits wide. We're also going to assume IEEE-754 floats.

Floating point values are represented as sign * β^exp * significand. For 32-bit binary floats, β is 2, the exponent exp ranges from -126 to 127, and the significand is a normalized binary fraction, such that there is a single leading non-zero bit before the radix point. For example, the binary integer representation of 25 is

11001₂

while the binary floating point representation of 25.0 would be:

1.1001₂ * 2^4 // normalized

The IEEE-754 encoding for a 32-bit float is

s eeeeeeee fffffffffffffffffffffff

where s denotes the sign bit, e denotes the exponent bits, and f denotes the significand (fraction) bits. The exponent is encoded using "excess 127" notation, meaning an exponent value of 127 (01111111₂) represents 0, while 1 (00000001₂) represents -126 and 254 (11111110₂) represents 127. The leading bit of the significand is not explicitly stored, so 25.0 would be encoded as

0 10000011 10010000000000000000000 // exponent 131-127 = 4
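
Here's a quick sketch (my addition, assuming 32-bit int and float and the IEEE-754 layout above) that pulls those three fields out of a float's bit pattern; for 25.0f it should report sign 0, raw exponent 131 (10000011₂), and fraction 0x480000:

#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 25.0f;
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);               /* raw bit pattern of the float */

    unsigned int sign     = bits >> 31;           /* 1 sign bit */
    unsigned int exponent = (bits >> 23) & 0xFF;  /* 8 exponent bits, biased by 127 */
    unsigned int fraction = bits & 0x7FFFFF;      /* 23 fraction bits */

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06x\n",
           sign, exponent, (int)exponent - 127, fraction);
    return 0;
}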

However, what happens when you map the bit pattern for the 32-bit integer value 25 onto a 32-bit floating point format? We wind up with the following:

0 00000000 00000000000000000011001

It turns out that in IEEE-754 floats, the exponent value 00000000₂ is reserved for representing 0.0 and subnormal (or denormal) numbers. A subnormal number is a number close to 0 that can't be represented as 1.??? * 2^exp, because the exponent would have to be smaller than what we can encode in 8 bits. Such numbers are interpreted as 0.??? * 2^-126, with as many leading 0s as necessary.

In this case, it adds up to 0.00000000000000000011001₂ * 2^-126, which gives us 3.50325 * 10^-44.
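
That works out to 25 * 2^-23 * 2^-126 = 25 * 2^-149, which a short sketch (mine, using ldexpf from <math.h> and assuming IEEE-754 floats) can confirm:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* 25 * 2^-149: the subnormal value encoded by the bit pattern of the int 25 */
    float tiny = ldexpf(25.0f, -149);
    printf("%g\n", tiny);   /* expected: 3.50325e-44 */
    return 0;
}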

You'll have to map large integer values (in excess of 2^24) to see anything other than 0 out to a bunch of decimal places. And, like Keith says, this is all undefined behavior anyway.
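
For instance, the int value 1 << 30 (0x40000000) happens to be the IEEE-754 encoding of 2.0f, so a sketch like the following (my example, with the same caveat about matching sizes) produces output that %f can actually display:

#include <stdio.h>
#include <string.h>

int main(void) {
    int x = 1 << 30;            /* 0x40000000, the encoding of 2.0f */
    float f;
    memcpy(&f, &x, sizeof f);   /* assumes sizeof(int) == sizeof(float) */
    printf("%f\n", f);          /* expected: 2.000000 */
    return 0;
}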

John Bode