
Possible Duplicate:
strange output in comparision of float with float literal

So the program just reads in a bunch of numbers and finds their average by dividing the sum by how many numbers were entered. However, the final result has a few extra decimals tacked onto the end and I'm not sure why.

For this given input: 483, 10, 3051, 188, 200, 0

The output should be 786.4 BUT INSTEAD it is 786.400024. What am I doing wrong? Thanks in advance fellas.

#include <stdio.h>

int main(int argc, char** argv)
{
    int averageOfNumbers = 0;     /* running sum of the numbers entered */

    printf("Enter the sequence of numbers:");
    int nextNumber;
    float numberCounter = 0;      /* how many numbers were entered */
    do
    {
        scanf("%d", &nextNumber);
        if(nextNumber > 0)
        {
            numberCounter++;
            averageOfNumbers += nextNumber;
        }
    }
    while(nextNumber > 0);
    float finalAverage = (float) (averageOfNumbers/numberCounter);
    averageOfNumbers = averageOfNumbers/numberCounter;   /* note: truncates back to int and is never used */
    printf("Average of the numbers in the sequence is %f\n", finalAverage);

    return 0;
}


3 Answers


Floating point numbers provide a precise, but inexact approximation to real numbers. (How precise depends on their size: how many bits of precision are available in the floating point type you're using.)

IEEE 754 floating point (a very popular representation used on many computers) uses binary. This means that binary fractions such as 1/2, 1/4, 1/8, and combinations thereof (3/8, 5/16, etc.) are represented exactly (within the limits of how many bits of precision are available). Fractions not based on powers of two are not representable exactly.
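For example, here is a small check (just a sketch, assuming IEEE 754 binary floating point; the exact trailing digits may differ on other systems):

#include <stdio.h>

int main(void)
{
    /* 0.375 is 3/8, a sum of powers of two, so it is stored exactly */
    printf("%.20f\n", 0.375);   /* prints 0.37500000000000000000 */

    /* 0.3 is not a binary fraction, so the stored value is only close */
    printf("%.20f\n", 0.3);     /* prints something like 0.29999999999999998890 */
    return 0;
}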

The number 1/10 or 0.1 has no exact representation. When you input 0.1 into the machine, it is converted to a value with many binary digits that is close to 0.1. When you print this back with printf or what have you, you get 0.1 again because the printf function rounds it off, and so it looks like 0.1 is being exactly represented: 0.1 went into the machine, and by golly, 0.1 came out. Suppose that it takes 10 decimal digits of precision to see the difference between 0.1 and the actual value, but you are only printing to 8 digits. Well, of course it will show as 0.1. Your floating point printing routine has chopped off the error!
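To see that rounding at work (again a sketch assuming IEEE 754 doubles; the exact digits printed may vary):

#include <stdio.h>

int main(void)
{
    double d = 0.1;         /* the stored value is only close to 0.1 */

    printf("%.1f\n", d);    /* 0.1        -- rounding hides the error          */
    printf("%.8f\n", d);    /* 0.10000000 -- still looks exact at 8 digits     */
    printf("%.20f\n", d);   /* 0.10000000000000000555 -- the error shows up    */
    return 0;
}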

The %f conversion specifier in the C printf will use more digits for larger numbers because it uses a fixed number of places past the decimal point, with the default being six. So for instance 0.002 will print as 0.002000. But the number 123456 will print as 123456.000000. The larger the number in terms of magnitude, the more significant figures are printed. When you print 786.4 with %f, you're actually asking for 9 decimal digits of precision. Three for the 786 integer part and then six more.
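A quick illustration of that default of six places past the decimal point:

#include <stdio.h>

int main(void)
{
    printf("%f\n", 0.002);      /* 0.002000      -- six places after the point, 4 significant figures */
    printf("%f\n", 123456.0);   /* 123456.000000 -- still six places, 12 significant figures          */
    printf("%f\n", 786.4);      /* 786.400000    -- 3 + 6 = 9 significant figures requested           */
    return 0;
}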

You're using float which is most likely a 32 bit IEEE 754 float. This has only 24 bits of precision. (1 bit is used for the sign and 8 for the binary exponent, leaving 23 stored significand bits; an implicit leading 1 bit brings the precision to 24 bits.) 24 bits is equivalent to only about 7 decimal digits of precision!

So, you're asking the machine to print 786.4 (for which it has an inexact representation, remember!) to nine significant figures, from a floating-point representation that is only good for about 7 decimal significant figures. You're asking for two more digits of precision which are not there, and therefore getting two error digits.
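Here is a standalone sketch that reproduces the question's output (it assumes float is a 32 bit IEEE 754 type, which it almost certainly is on your machine):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float f = 786.4f;    /* nearest float to 786.4 is roughly 786.400024414 */

    printf("FLT_DIG = %d\n", FLT_DIG);  /* typically 6: decimal digits a float holds reliably   */
    printf("%f\n", f);                  /* 786.400024 -- 9 digits asked of a ~7-digit type      */
    printf("%.1f\n", f);                /* 786.4      -- ask for fewer digits and it looks fine */
    return 0;
}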

So what you can do is use a wider type like double and/or change the way you're printing the result. Do not ask for so many significant figures. For instance try %.3f (three digits past decimal point).
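Applied to the numbers from the question (a minimal sketch with the sum and count hard-coded; in the real program you would of course keep the reading loop):

#include <stdio.h>

int main(void)
{
    int    sum   = 483 + 10 + 3051 + 188 + 200;  /* 3932 */
    double count = 5;

    double average = sum / count;   /* double division gives 786.4 (very nearly) */
    printf("%.3f\n", average);      /* prints 786.400 */
    printf("%f\n",   average);      /* prints 786.400000 */
    return 0;
}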

The float type in C should rarely be used by the way. You usually want to be working with the type double. float has its uses, such as saving space in large arrays.

Kaz

Using a double will solve your problem. The limited accuracy and precision of float is the cause.
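For example (a sketch; the exact trailing digits depend on the platform's double, but the idea is the same):

#include <stdio.h>

int main(void)
{
    double d = 786.4;

    printf("%f\n", d);      /* 786.400000 -- looks clean at the default six places           */
    printf("%.15f\n", d);   /* something like 786.399999999999977 -- still not exactly 786.4 */
    return 0;
}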

nims
  • Using a double simply sweeps the problem further under the rug. If you print that value to more significant figures under double, you will again reveal the fact that the result does not have an exact representation in double either. – Kaz Apr 09 '12 at 07:34

Floating point types cannot represent all numbers with perfect accuracy - many numbers can only ever be represented approximately by a floating-point type. This means that, given enough significant digits, you'll eventually see a loss of precision for those numbers which don't fit neatly into the representation.

The problem with using the float data type is that it's the least accurate of the floating-point types. (The default floating-point type in C and C++ is double, and this is the one you should use; avoid float unless you have some compelling reason to use it.) On a typical desktop platform, a float might only be accurate to around 6-7 significant figures for an approximated number, while a double on the same platform could be accurate to around 15-16 significant figures for the same number (note: that's significant figures and not decimal places!).
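A small sketch that shows the difference (the FLT_DIG/DBL_DIG values and exact digits assume typical IEEE 754 types):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("FLT_DIG = %d\n", FLT_DIG);  /* typically 6  -- decimal digits a float holds reliably  */
    printf("DBL_DIG = %d\n", DBL_DIG);  /* typically 15 -- decimal digits a double holds reliably */

    printf("%.10f\n", 786.4f);          /* 786.4000244141 -- the float's error shows after ~7 digits */
    printf("%.10f\n", 786.4);           /* 786.4000000000 -- the double still looks exact here       */
    return 0;
}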

I strongly suggest that you spend time reading up on how floating point numbers work (type it into Google and you'll find loads of explanations!); understanding how they are represented internally will help make you a better programmer.

Ben Cottrell
  • Understanding floating point will make you a better numeric analyst. If you want to be a better programmer, swear off using floating point for the rest of your life. I recently watched a talk by Gerald Sussman on infoq in which he admitted that he gets scared whenever there is a floating point number in the code. Wow, my thoughts exactly. :) – Kaz Apr 09 '12 at 18:26
  • The talk is here: http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute. And the remark starts at around 0:11:00. He says, *"... floating-point thingies that are dangerous"* and *"nothing brings fear to my heart more than a floating-point number"* and *"numerical analysis: the biggest black art I know"*. LOL! Cheers. – Kaz Apr 09 '12 at 18:26