
I entered the following code (and had no compilation problems or anything):

```c
float y = 5/2;
printf("%f\n", y);
```

The output was simply: `2.000000`

My math isn't wrong, is it? Or am I wrong about the `/` operator? It means divide, doesn't it? And `5/2` should equal `2.5`?

Any help is greatly appreciated!

– Alex Lord
  • Because the *result* is `float`, but the operands, and hence the operation itself, are integer. – Eugene Sh. Oct 26 '16 at 14:09
  • [Related question here](https://stackoverflow.com/questions/27674295/why-do-we-separately-cast-to-float-in-an-integer-division). – WhozCraig Oct 26 '16 at 14:11

3 Answers


`5` is an `int` and `2` is an `int`. Therefore, `5/2` uses integer division. If you replace `5` with `5.0f` (or `2` with `2.0f`), making one of the operands a `float`, you will get floating-point division and the `2.5` you expect. You can also achieve the same effect by explicitly casting either the numerator or the denominator (e.g. `((float) 5) / 2`).
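A self-contained sketch of those variants (my own illustration, not part of the original answer):

```c
#include <stdio.h>

int main(void)
{
    float a = 5 / 2;          /* both operands int: integer division gives 2, converted to 2.0f */
    float b = 5.0f / 2;       /* one float operand forces floating-point division: 2.5f */
    float c = 5 / 2.0f;       /* same effect from the denominator side */
    float d = (float) 5 / 2;  /* explicit cast on the numerator also works */

    printf("%f %f %f %f\n", a, b, c, d);  /* prints: 2.000000 2.500000 2.500000 2.500000 */
    return 0;
}
```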

– R_Kapp

*Why does 5/2 result in '2' even when I use a float?*

Because you do not "use a float" in the division. `5/2` is an integer division; only its *result* (`2`) gets implicitly converted to `float`, becoming `2.0` (mind the dot).
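To make the two steps explicit, here is a minimal sketch of my own (not from the original answer):

```c
#include <stdio.h>

int main(void)
{
    /* Step 1: 5 / 2 is evaluated entirely in int arithmetic, truncating to 2.   */
    /* Step 2: only that int result is converted to float for the initialization. */
    float y = 5 / 2;

    /* Making the conversion explicit changes nothing: the cast comes too late. */
    float z = (float) (5 / 2);

    printf("%f %f\n", y, z);  /* prints: 2.000000 2.000000 */
    return 0;
}
```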

– alk

You should use a proper type cast:

```c
float y = (float) 5 / 2;
```

Without the cast, the program treats both numbers as `int`: it divides two `int`s and stores the result in the `float`, hence the answer `2.0`.
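One caveat worth adding (my own sketch, not from the original answer): the cast binds more tightly than the division, so parenthesis placement matters:

```c
#include <stdio.h>

int main(void)
{
    float y = (float) 5 / 2;    /* cast applies to 5 first: 5.0f / 2 -> 2.5f        */
    float z = (float) (5 / 2);  /* parentheses force integer division first -> 2.0f */

    printf("%f %f\n", y, z);    /* prints: 2.500000 2.000000 */
    return 0;
}
```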

– Swanand