The following piece of code:

#include <stdio.h>

int main(void)
{
    float a = 0.7;
    printf("%.10f %.10f\n", 0.7, a);
    return 0;
}

gives 0.7000000000 0.6999999881 as output, while the following piece of code:

#include <stdio.h>

int main(void)
{
    float a = 1.7;
    printf("%.10f %.10f\n", 1.7, a);
    return 0;
}

gives 1.7000000000 1.7000000477 as output.

Why is it that, when printing a, I got a value less than 0.7 in the first case but more than 1.7 in the second?


1 Answer

When you pass a floating-point constant such as 0.7 to printf, it has type double in C, so printf receives a double.

So it is not the same as passing a float variable with "the same" value: the initializer is rounded to the nearest float first, and that rounding can land on either side of the decimal value (just below for 0.7, just above for 1.7).

Change this constant value to 0.7f (or 1.7f), and you'll get the same results.

Alternatively, change float a to double a, and you'll also get the same results.


Option #1:

double a = 0.7;
printf("%.10f %.10f\n", 0.7, a);
// Passing two double values to printf

Option #2:

float a = 0.7;
printf("%.10f %.10f\n",0.7f,a);
// Expanding two float values to double values and passing them to printf
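
To see why a came out below 0.7 in one case and above 1.7 in the other: neither value is exactly representable in binary, so each is rounded to the nearest float, and that rounding can land on either side of the decimal constant. A minimal sketch that makes the stored values visible (the digits in the comments assume IEEE-754 single precision, which virtually every current platform uses):

#include <stdio.h>

int main(void)
{
    /* The float constants are promoted to double when passed to printf,
       but each one was already rounded to the nearest float. */
    printf("%.20f\n", 0.7f); /* 0.69999998807907104492... just below 0.7 */
    printf("%.20f\n", 1.7f); /* 1.70000004768371582031... just above 1.7 */
    return 0;
}

The double constants 0.7 and 1.7 are also inexact, but their error is far smaller, so with only 10 digits requested they round to 0.7000000000 and 1.7000000000, which is why they looked exact in the question.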