
I made a simple C program that reads a float from the user with the scanf() function and displays it with the printf() function. But when the input number is quite large, the output deviates slightly. Why might that happen? For example, if I give 2345 as input, the output is as expected, but if I give 1234567898 as input, the output is unexpected.

#include <stdio.h>

int main()
{
    float number;
    scanf("%f", &number);             /* read a float from standard input */
    printf("The number %f", number);  /* print it back */
    return 0;
}
  • Floating point numbers don't have infinite precision. If you want to increase the precision, use `double` instead of `float`. That will buy you some additional precision, but keep in mind that it's finite as well. – Tom Karzes Jun 08 '21 at 08:26
  • Please take some time to read [the help pages](http://stackoverflow.com/help), take the SO [tour], read [ask], as well as [this question checklist](https://codeblog.jonskeet.uk/2012/11/24/stack-overflow-question-checklist/). Then please [edit] your question to clearly show the input you give, and the actual output you get. – Some programmer dude Jun 08 '21 at 08:27
  • 1234567898 is a 31-bit number. A single-precision float only has 24-bit precision. Use a `double` variable for better precision, or a `long long int` if you need to work with large integer values. – r3mainer Jun 08 '21 at 08:28
  • Or this? https://stackoverflow.com/questions/23420783/ – user3386109 Jun 08 '21 at 08:29
  • Could be a FAQ: [Is floating point math broken?](https://stackoverflow.com/q/588004/3545273)... – Serge Ballesta Jun 08 '21 at 09:05
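
A quick way to see the rounding the comments describe is to store the literal value in a `float` and print it back. This is an illustrative sketch (not part of the original thread), and the exact output assumes IEEE-754 single precision:

#include <stdio.h>

int main(void)
{
    /* 1234567898 needs 31 bits of precision, but a float mantissa holds only 24.
       Between 2^30 and 2^31 consecutive floats are 128 apart, so the value is
       rounded to the nearest multiple of 128. */
    float f = 1234567898.0f;
    printf("stored as %.1f\n", f);   /* typically prints 1234567936.0 */
    return 0;
}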

1 Answer


Single precision floating point (`float`) is good for approximately 6 significant decimal digits; `double` allows about 15. 1234567898.0 has 10 significant figures, so only the first 6 (most significant) digits can be relied on.
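
As a minimal sketch of the fix suggested in the comments (assuming the goal is simply to echo the value back), switching to `double` with the matching `%lf` conversion in `scanf` preserves all 10 digits of 1234567898:

#include <stdio.h>

int main(void)
{
    double number;                          /* ~15 significant decimal digits */
    if (scanf("%lf", &number) == 1)         /* %lf reads a double; check the read succeeded */
        printf("The number %f\n", number);  /* %f prints a double with printf */
    return 0;
}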

Clifford