#include <stdio.h>
int main(void)
{
float c =8/5;
printf("The Result: %f", c);
return 0;
}
The answer is 1.000000. Why isn't it 1.600000?
C is interpreting your 8/5 as integer division. With integer operands, C truncates the result down to 1.
Change your code to 8.0/5.0. That way the compiler knows you're working with real numbers, and it will store the result you're looking for.
The expression 8/5 is an all-int expression, so it evaluates to (int)1.
The automatic conversion to float happens in the assignment.
If you convert to float before the divide, you will get the answer you seek:
(float)8/5
or just
8.0/5
When you don't specify what data types you use (for example, the constants 8 and 5 in your code), C uses the smallest reasonable type. In your case, it assigned 8 and 5 the int type, and because both operands of the division were integers, C produced an integer result. Integers don't have decimal points or fractional parts, so C truncates the result, throwing away the remainder of the division and leaving you with 1 instead of 1.6.
Notice this happens even though you store the result in a float. The expression is evaluated using integer arithmetic, and only then is the result converted and stored.
There are at least two ways to fix this:
Cast part of the expression to double, or another type that can store fractional parts:
foo = (double)8 / 5
Here 8 is cast to the double type, so C performs floating-point division. Note that if you instead cast (8 / 5), the integer division is performed before the cast and you still get 1.
Use a floating-point constant as one of the operands:
foo = 8.0/5