
I have a program that outputs 29.325001 if I use float, but 29.325000 (the right result) if I use double.

I thought that float was accurate to about 14 decimal places, but it's producing a different result than I expected. Why?

Code below:

    #include <stdio.h>

    int main(void)
    {
        int elements, i;

        scanf("%d", &elements);

        if (elements > 10){
            return 0;
        }

        float ppp[10], takeHowMuch[10], total;

        for (i = 0; i < elements; i++){
            scanf("%f", &ppp[i]);
        }

        for (i = 0; i < elements; i++){
            scanf("%f", &takeHowMuch[i]);
        }

        /* always sums the first four products */
        total = (takeHowMuch[0] * ppp[0]) + (takeHowMuch[1] * ppp[1])
              + (takeHowMuch[2] * ppp[2]) + (takeHowMuch[3] * ppp[3]);

        printf("%.6f", total);

        return 0;
    }
alcatraz
  • Why would you expect 14 decimal places of precision from a 32-bit float (which has at best 23 bits of significand)? – EOF Nov 06 '18 at 21:53
  • "I thought that float rounds up to 14 decimal places," --> [Typical `float`](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) shown as decimal encodes as expected to at least 6 leading significant digits. `29.325001` is 8. What is the source of your 14? – chux - Reinstate Monica Nov 06 '18 at 21:53
  • This is worth a look: https://stackoverflow.com/questions/588004/is-floating-point-math-broken – yano Nov 06 '18 at 21:53
  • Thanks chux, that does explain it – alcatraz Nov 06 '18 at 21:55

1 Answer


"I thought that float rounds up to 14 decimal places,"

The code's precision expectations are too high.

A typical float, shown as decimal, encodes as expected to at least 6 leading significant digits. 29.325001 is 8 significant digits.
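
To see this concretely: 29.325 has no exact binary representation, and the nearest IEEE-754 float is roughly 29.32500076, which %.6f rounds to 29.325001. A minimal sketch, assuming IEEE-754 float and double:

    #include <stdio.h>

    int main(void)
    {
        // The nearest float to 29.325 is roughly 29.32500076, so
        // %.6f rounds it up; the nearest double is far closer.
        printf("%.6f\n", 29.325f); // 29.325001
        printf("%.6f\n", 29.325);  // 29.325000
        return 0;
    }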

Use FLT_DIG from <float.h>:

    // printf("%.6f", total);
    printf("%.*e\n", FLT_DIG - 1, total);
    printf("%.*g\n", FLT_DIG, total);

Output

    2.93250e+01
    29.325
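
Note that the precision given to %.*e counts digits after the decimal point, so FLT_DIG - 1 yields FLT_DIG significant digits in total, while %.*g treats the precision as the maximum number of significant digits. The same approach carries over to double via DBL_DIG; a sketch under the assumption of IEEE-754 double, where DBL_DIG is 15:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        double total = 29.325;
        // %g drops trailing zeros, so this prints 29.325
        printf("%.*g\n", DBL_DIG, total);
        return 0;
    }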
chux - Reinstate Monica