
In the code below,

    #include <stdio.h>

    float area(float var)
    {
        return (var*var);
    }

    int main()
    {
       float side;
       printf("\nEnter the side of square : ");
       scanf("%f",&side);

       printf("The area is : %.1f",area(side));

       return 0;
    }

When the input is 15.5 and the format specifier "%.1f" is used, the output is:

    240.3

and when the input is 15.5 with format specifier "%.2f", the output is:

    240.25 (which is actually the correct value)



Why does the value get rounded off when the format specifier is "%.1f", instead of just printing up to the first decimal place so that the output is 240.2?
  • Because high-quality implementations of `printf` *do* round when you give them format specifiers with explicit precisions, like `%.1f`. It's usually what you want. See for example the discussion of 0.3 in ikegami's comment just below. – Steve Summit Jun 04 '22 at 16:16
  • The answer is probably "because the spec says so", but I don't know what the spec says. If it's silent, then the answer is probably "because it makes the most sense to print the closest representable value". Note that if it truncated, `printf("%.1f", 0.3)` would print `0.2`, since 0.3 is actually 0.299999999999999988897769753748434595763683319091796875 – ikegami Jun 04 '22 at 16:17
  • If `printf` didn't round off floating-point numbers when printing them, we'd get even more questions of the form [Is floating point math broken?](https://stackoverflow.com/questions/588004) than we already do. And, seriously, the fact that floating-point numbers are binary internally, and therefore never quite match the decimal values you expect when printing them back out, means that rounding (as opposed to truncating) really is the right thing to do. – Steve Summit Jun 04 '22 at 16:27
  • Stated another way, it's a general principle that the right way to minimize errors when doing floating-point arithmetic is to carry around a little more precision than you need, then round off to your required precision at the very end. And `printf` is no exception to this rule; in fact `printf` is often an important tool in implementing this rule. (It turns out it's actually quite difficult to implement `printf` so it rounds properly in every case, but high-quality implementations go the extra mile in order to get it right, and that's a good thing.) – Steve Summit Jun 04 '22 at 16:31
  • @Free_loader And speaking of "carrying around a little more precision than you need", that's why it's almost always a good idea to use type `double`, *not* `float`. The precision of type `float` is low enough that it'll often give you small but annoying errors. The precision of type `double` is much better, and usually enough to give you good results. – Steve Summit Jun 04 '22 at 16:35
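
A minimal sketch of the float-vs-double point from the comment above (the value 0.1 and the `%.17g` width are illustrative choices, not something from the question itself):

    #include <stdio.h>

    int main( void ) {
       float  f = 0.1f;   // float keeps roughly 7 significant decimal digits
       double d = 0.1;    // double keeps roughly 16 significant decimal digits

       printf( "float : %.17g\n", f );   // 0.10000000149011612
       printf( "double: %.17g\n", d );   // 0.10000000000000001
    }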

1 Answer


The spec mandates rounding. For the `f` conversion specifier, the C17 standard says:

> A double argument representing a floating-point number is converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.

Note the last sentence.

Now, there are many ways of rounding. I think `printf` is directed to honour the rounding mode set by `fesetround`, and it does on my system.

    #include <fenv.h>
    #include <stdio.h>

    // gcc doesn't recognize or need this.
    #pragma STDC FENV_ACCESS ON

    int main( void ) {
       fesetround( FE_DOWNWARD );
       printf( "%.1f\n", 0.66 );   // 0.6

       fesetround( FE_TONEAREST );
       printf( "%.1f\n", 0.66 );   // 0.7
    }

Demo on Compiler Explorer


My system defaults to rounding to nearest. When it needs to choose between two equally near numbers (e.g. for 0.5), it rounds to the even one.

    #include <stdio.h>

    int main( void ) {
       printf( "%.0f\n", 0.5 );  // 0
       printf( "%.0f\n", 1.5 );  // 2
       printf( "%.0f\n", 2.5 );  // 2
       printf( "%.0f\n", 3.5 );  // 4
       printf( "%.0f\n", 4.5 );  // 4
    }

Demo on Compiler Explorer

(I used `%.0f` and values ending in .5 since 5/10 can be represented exactly by a floating-point number.)


I don't know if the spec mandates defaulting to round to nearest or not, but that's the natural thing to do when one wants to reduce the number of decimal places in a number.

It would also produce a lot of weird results if it didn't do this. Would you expect `printf( "%.1f\n", 0.3 );` to print 0.2? Well, it would if you rounded down instead of rounding to nearest.

Many numbers are periodic in binary, including 1/10, 2/10, 3/10, 4/10, 6/10, 7/10, 8/10 and 9/10. These can't be represented exactly using floating-point numbers. Ideally, the compiler uses the nearest representable number instead, and this number is sometimes a little higher, sometimes a little lower.

    #include <stdio.h>

    int main( void ) {
       printf( "%.100g\n", 0.1 );  // 0.1000000000000000055511151231257827021181583404541015625
       printf( "%.100g\n", 0.3 );  // 0.299999999999999988897769753748434595763683319091796875
    }

Demo on Compiler Explorer

If `printf` were to truncate, `printf( "%.1f\n", 0.3 );` would print 0.2.

    #include <fenv.h>
    #include <stdio.h>

    // gcc doesn't recognize or need this.
    #pragma STDC FENV_ACCESS ON

    int main( void ) {
       fesetround( FE_DOWNWARD );
       printf( "%.1f\n", 0.3 );   // 0.2

       fesetround( FE_TONEAREST );
       printf( "%.1f\n", 0.3 );   // 0.3
    }

Demo on Compiler Explorer


Finally, I don't find anything in the spec about how ties should be broken when rounding to nearest. The decision to round halfway cases to even appears to be the implementation's.

This tie-breaking rule has no positive/negative bias and no bias toward/away from zero, making it a natural choice. It's even "the default rounding mode used in IEEE 754 operations for results in binary floating-point formats."
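
Here is a small sketch of that lack of bias. It compares `rint` (which, under the default rounding mode, rounds to nearest with ties to even) against a hand-rolled `floor(x + 0.5)` that always sends ties upward; the sample values are just the halfway cases from the earlier demo.

    #include <math.h>
    #include <stdio.h>

    int main( void ) {
       double values[] = { 0.5, 1.5, 2.5, 3.5, 4.5 };
       double sum_exact = 0, sum_half_up = 0, sum_half_even = 0;

       for ( size_t i = 0; i < sizeof values / sizeof *values; ++i ) {
          sum_exact     += values[i];
          sum_half_up   += floor( values[i] + 0.5 );  // ties always rounded up
          sum_half_even += rint( values[i] );         // ties rounded to the even neighbour
       }

       printf( "exact sum:     %.1f\n", sum_exact );      // 12.5
       printf( "half-up sum:   %.1f\n", sum_half_up );    // 15.0 (systematically too high)
       printf( "half-even sum: %.1f\n", sum_half_even );  // 12.0 (much closer to the exact sum)
    }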

ikegami