
I am going through a C application's code and find that float-by-integer division increases the precision, which I don't understand. E.g., dividing the floating-point number 12926.0 by the integer 100 should result in 129.26, but instead I get 129.259995.

Below is a simple code representation of the actual code.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i = 12926;
    float k = 0.0f;

    k = (float) i / 100;

    printf("k <%f>\n", k);
    return 0;
}

I have searched for explanations and tried evaluating values and expressions while debugging in GDB:

(gdb) print 12926.0 / 100.0
$39 = 129.25999999999999
(gdb) print (float) i / 100
$40 = 129.259995
(gdb) print i / 100
$41 = 129

I am flummoxed as to why C shows the result as 129.259995 instead of 129.26. Is there a way to control the precision to get 129.26 as the result?

  • What does your title mean? `k=(float) i / 100` in your program uses `float` division. `12926.0 / 100.0` in the debugger uses `double` division. ` (float) i / 100` in the debugger uses `float` division. It uses an integer divisor (that is converted to `float` to match the numerator), but the result is not more accurate, and is not shown with more precision, than the `double` operation. `i / 100` uses integer division, and its result is not more accurate than the floating-point operations. What do you mean that “Floating-point to integer division increases the precision”? – Eric Postpischil Mar 25 '23 at 11:30
  • @user3363546, "dividing a floating point number 12926.0 by the integer 100 should result in 129.26" is incorrect. `float` typically can encode exactly nearly 2^32 different values. `129.26` is **not** one of them. The closest `float` is exactly 129.2599945068359375. Printing that to 6 places after the `.` is 129.259995. – chux - Reinstate Monica Mar 25 '23 at 11:31

1 Answer


From printf()'s man page:

f, F The double argument is rounded and converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is explicitly zero, no decimal-point character appears. If a decimal point appears, at least one digit appears before it.


Is there a way to control the precision length to get 129.26 as the result?

Yes, specify precision:

printf("k <%.2f>\n", k);