4
#include <stdio.h>

union NumericType
{
    float value;
    int intvalue;
}Values;

int main()
{
    Values.value = 1094795585.00;
    printf("%f \n",Values.value);
    return 0;
}

This program outputs as :

1094795648.000000 

Can anybody explain Why is this happening? Why did the value of the float Values.value increase? Or am I missing something here?

timrau
RubyDubee
  • 1
    possible duplicate of [Difference between float and double](http://stackoverflow.com/questions/2386772/difference-between-float-and-double) – Ben Voigt Aug 05 '10 at 20:56
  • 3
    @Ben Voigt: I don't believe this is a duplicate of that question. Questioner is not asking what the difference between float and double is---he's trying to understand behavior that falls out from representation error in any finite-size type. – Stephen Canon Aug 05 '10 at 21:15
  • please edit title and tag of your question, this has nothing to do with the fact that you are using the `float` inside a union. – Jens Gustedt Aug 05 '10 at 22:09
  • It's definitely not a new question. If you think a different question better captures the spirit then by all means provide it. – Ben Voigt Aug 06 '10 at 02:16
  • Is there a compiler option to omit warnings for floating point literals that cannot be expressed as written? – R.. GitHub STOP HELPING ICE Aug 06 '10 at 06:59
  • Completely insane output with `short`!! `short x = 32768; printf("%d\n", x);` gives `-32768`!! – R.. GitHub STOP HELPING ICE Aug 06 '10 at 09:41
  • [Is floating point math broken?](http://stackoverflow.com/q/588004/995714) – phuclv Jun 29 '15 at 11:24

7 Answers

28

First off, this has nothing whatsoever to do with the use of a union.

Now, suppose you write:

int x = 1.5;
printf("%d\n", x);

what will happen? 1.5 is not an integer value, so it gets converted to an integer (by truncation), and x actually gets the value 1, which is exactly what is printed.

The exact same thing is happening in your example.

float x = 1094795585.0;
printf("%f\n", x);

1094795585.0 is not representable as a single-precision floating-point number, so it gets converted to a representable value. This happens via rounding. The closest representable values on either side of your number are:

1094795520 (0x41414100) -- closest `float` smaller than your number
1094795585 (0x41414141) -- your number
1094795648 (0x41414180) -- closest `float` larger than your number

Because your number is slightly closer to the larger value (this is somewhat easier to see if you look at the hexadecimal representation), it rounds to that value, so that is the value stored in x, and that is the value that is printed.

Stephen Canon
14

A float isn't as precise as you would like it to be. Its effective 24-bit mantissa provides only 7-8 decimal digits of precision, while your example requires 10. A double has an effective 53-bit mantissa, which provides 15-16 decimal digits of precision, enough for your purpose.

Peter G.
7

It's because your float type doesn't have the precision to display that number. Use a double.

Carl Norum
  • but should it increase or decrease? i suppose it should have decreased? – RubyDubee Aug 05 '10 at 20:37
  • 3
    I think it should round to the nearest available float number. I leave that as an exercise :-). – Peter G. Aug 05 '10 at 20:41
  • Precision loss is precision loss. The represented value is quantized, and can and will fall to either side of the intended value. IEEE-754 specified some rounding modes that might influence this, but that would depend on your compiler providing a way to access them for a compile time constant. – RBerteig Aug 05 '10 at 20:43
  • Cool i think i should read the specification carefully. Thanks anyways! – RubyDubee Aug 05 '10 at 20:46
  • @Webbisshh: rounding in C99 is controlled via fesetround() / fesetenv() / feupdateenv() – ninjalj Aug 05 '10 at 20:51
  • @ RBerteig I agree. However, I have to remark that "round to nearest" is the default mode specified by IEEE-754. – Peter G. Aug 05 '10 at 20:52
  • @ninjalj: The `<fenv.h>` functions affect rounding at runtime; this conversion may occur at compile time and will not necessarily be affected by their use. – Stephen Canon Aug 05 '10 at 21:09
  • @Peter G, IMHO, Round to nearest is the only mode that makes sense for translating source text, but you never know what a standards committee will decide. ;-). – RBerteig Aug 05 '10 at 21:22
2

floats only have about 7 decimal digits of precision.

When I do this, I get the same results:

#include <stdio.h>

int main(void)
{
    float f = 1094795585.00f;
    //        1094795648.000000
    printf("%f \n", f);
    return 0;
}
C.J.
1

I simply don't understand why people use floats - they are often no faster than doubles and may be slower. This code:

#include <stdio.h>

union NumericType
{
    double value;
    int intvalue;
}Values;

int main()
{
    Values.value = 1094795585.00;
    printf("%lf \n",Values.value);
    return 0;
}

produces:

1094795585.000000
  • 3
    Sometimes the FPU is only single precision. Particularly SIMD implementations are often single precision. – Carl Norum Aug 05 '10 at 21:02
  • 3
    They may also be dramatically faster; if you're writing code for a smartphone, for example -- on some current ARM processors, float can be 8 (or more) times faster than double. – Stephen Canon Aug 05 '10 at 21:11
  • @Stephen Sure. But I suspect the majority of people here are using 80x86 architectures, and/or think that floats are naturally faster than doubles without having read the processor spec sheet or done any testing on whether the performance difference matters on their app. Speaking personally, I always prefer correctness (precision) over speed, until I'm forced to sacrifice it. –  Aug 05 '10 at 21:18
  • "I simply don't understand why people use floats" Also keep in mind space. If you're storing N-million floating-point numbers, you potentially save N megabytes of RAM. – Rooke Aug 05 '10 at 21:19
  • 1
    The other thing worth noting is that using `double` doesn't make this problem go away; it only changes the size of the numbers for which it occurs. Representation error is inherent to the use of any fixed-size type. – Stephen Canon Aug 05 '10 at 21:22
  • @Rooke But I don't think most people are storing millions of numbers that either - I know I never have. It seems to me that many (most?) questions here on SO use float when they could much better have used double. –  Aug 05 '10 at 21:23
  • 1
    @Stephen Of course - but using float instead of double for no particular reason will make you run into the problem more often. –  Aug 05 '10 at 21:24
  • Using 64-bit integers makes overflow bugs less likely than if you use 32-bit integers; would you say that you "simply don't understand why people use 32-bit integers"? – Stephen Canon Aug 05 '10 at 21:33
  • @Stephen I would use whatever integer size was natural for what I wanted my application to do. The natural real number type for the kind of applications I write, and I suspect for the majority of programmers here, is a double. –  Aug 05 '10 at 21:39
  • I agree with Neil 100 % here. Floats are best left for special purposes. Besides this simple example they can easily screw up numeric code in unforeseen ways if it is not crafted very carefully. – Peter G. Aug 05 '10 at 21:59
  • @Peter G., @Neil Butterworth: that's more or less my point, though. *no* type should be used without consideration. The solution isn't "always use double"; the solution is "think about what you're doing, and use the appropriate type". – Stephen Canon Aug 05 '10 at 23:10
0

By default, printing a float with %f gives six digits after the decimal point. If you want two digits after the decimal point, use %.2f. Even the program below gives the same result:

#include <stdio.h>
union NumericType
{
    float value;
    int intvalue;
}Values;

int main()
{
    Values.value = 1094795585;
    printf("%f \n",Values.value);
    return 0;
}

Result 
./a.out
1094795648.000000
chaitanyavarma
0

It only complicates things to speak of decimal digits, because this is binary arithmetic. To explain, we can begin by looking at the set of integers in the single-precision format where all the integers are representable. Since the single-precision format has 23+1 = 24 bits of precision, that range is

0 to 2^24-1

This is not detailed enough for the explanation, so I'll refine it further to

0 to 2^24-2^0 in steps of 2^0

The next higher set is

0 to 2^25-2^1 in steps of 2^1

The next lower set is

0 to 2^23-2^-1 in steps of 2^-1

Your number, 1094795585 (0x41414141 in hex), falls in the range whose maximum is slightly less than 2^31. That range can be expressed in detail as 0 to 2^31-2^7 in steps of 2^7. This is logical because 2^31 is 7 powers of 2 greater than 2^24, so the step must also be 7 powers of 2 greater than 2^0.

Looking at the "next lower" and "next higher" values mentioned in another answer, we see that the difference between them is 128, i.e. 2^7.

There's really nothing strange or weird or funny or even magic about this. It's actually absolutely clear and quite simple.

Olof Forshell