#include <stdio.h>
int main()
{
    float a = 355/113;
    printf("%f", a);
    return 0;
}
Why does this print 3.000000 instead of 3.141593?
Because 355/113 is integer division, not floating-point division. The decimal portion is truncated off before the result is assigned to the float.
Try this instead:
float a = 355.0f / 113.0f;
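
For comparison, here is a minimal, self-contained sketch (the variable names are illustrative) showing both forms and what each prints:

#include <stdio.h>

int main(void)
{
    float truncated = 355 / 113;      /* integer division yields 3, then converted to 3.0f */
    float precise = 355.0f / 113.0f;  /* floating-point division yields ~3.141593 */

    printf("%f\n", truncated);        /* prints 3.000000 */
    printf("%f\n", precise);          /* prints 3.141593 */
    return 0;
}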
The division is performed using integer arithmetic, and the integer result is then converted to a float. If you want floating-point division, use floating-point literals such as 355.0 and 113.0.
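
Note that only one operand actually needs to be a floating-point literal; the other is promoted by the usual arithmetic conversions. A small sketch of that variant:

#include <stdio.h>

int main(void)
{
    /* 113 is promoted to double because 355.0 is a double literal */
    float a = 355.0 / 113;
    printf("%f\n", a);  /* prints 3.141593 */
    return 0;
}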
You are dividing two integers, but you want a float result, so just do this:
#include <stdio.h>

int main(void)
{
    /* Casting one operand to float promotes the other, so the division is done in floating point */
    float a = 355/(float)113;
    printf("%f", a);
    return 0;
}