
Can someone please explain this behavior:

#include <stdio.h>

int main(int argc, char **argv){

   printf("%f", ( 1 / 2 ) );     
   return 0;

} 

/* output : 0.00000 */
Tiger
    try `printf("%f", ( 1 / 2.0 ) );` , 1 / 2 is an integer division – quantdev Jun 27 '15 at 03:19
  • 1
    See a very similar question: [Why does this output of the same expression from printf differ from cout?](http://stackoverflow.com/q/19102778/1708801) – Shafik Yaghmour Jun 27 '15 at 03:20
  • In addition to the fact that the division yields `0`, it yields `0` *of type `int`*. You print it with `%f` which requires an argument of type `double`. The behavior is undefined. Typically an `int` 0 and a `double` 0 are both represented as all-bits-zero, but that's not guaranteed, and they're commonly of different sizes. – Keith Thompson Jun 27 '15 at 05:20

2 Answers


Exactly what quantdev said. The people who designed the C language decided that dividing one integer by another should yield an integer, roughly on the grounds that integers are super useful and a floating-point result would get in your way when, say, you're indexing an array. So the compiler tosses the remainder into the garbage and you're left with 0.

1 / 2

If you want your darn double (or float), you have to make at least one of the two operands in the division a floating-point value. Thus,

1 / 2.0

and, in context...

not what you want:

printf("%f", ( 1 / 2 ) ); ,

what you want:

printf("%f", ( 1 / 2.0 ) ); ,

Charlie
  • The compiler doesn't "think" *hey, I'll bet their dividend is an integer*. The dividend *is* an integer (more precisely, a value of type `int`) by definition. You make it sound like the compiler is making a decision; in fact, the rules are strictly defined by the language. – Keith Thompson Jun 27 '15 at 05:19
  • Thanks for the comment, Keith. I reworded it a bit and put the onus on the creators of the C language. Hopefully this is better, but I wouldn't mind editing again if it's still too personifying and assuming. – Charlie Jun 27 '15 at 16:27

1 / 2 is not a floating-point expression.

printf("%f", ( 1 / 2 ) );

The inner parentheses are unnecessary; this is a bit easier to read as:

printf("%f", 1 / 2);

In most cases, the type of an expression in C is determined by the expression itself, not by the context in which it appears. This applies even to subexpressions of larger expressions.
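
For example, even assigning the result to a double doesn't change how the division itself is evaluated (a small illustration):

double d = 1 / 2;      /* d is 0.0: the division is done in int, yielding 0, which is then converted */
double e = 1.0 / 2;    /* e is 0.5: one operand is double, so the division is done in double */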

The arithmetic operators +, -, *, and / always take two operands of the same numeric type, and yield a result of that type. There are rules to convert the operands to a common type, but 1 and 2 are both of type int, so we needn't worry about that. All these operators, if invoked with int operands, yield an int result. Integer division truncates, discarding any remainder, so 1 / 2 yields the int value 0.
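
A couple of concrete cases of that truncation (values chosen only for illustration):

7 / 2      /* yields the int value 3 */
-7 / 2     /* yields the int value -3; truncation is toward zero (guaranteed since C99) */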

So the original call is equivalent to:

printf("%f", 0);

The "%f" format requires an argument of type double; 0 is of type int. For most functions, there would be an implicit conversion, but the types of the parameters are determined by the format string, not by the function declaration, so the compiler doesn't know what type to convert to. (Consider that the format string doesn't have to be a string literal.) Passing an int argument with a "%f" format has undefined behavior. In your case, it just happened to print 0. We could speculate about how that happened, but it doesn't matter; you need to fix the code.

If you wanted to print that int value, you could use "%d":

printf("%d", 1 / 2);

But you probably want 0.5. You can get that by using operands of type double:

printf("%f", 1.0 / 2.0);

(You could change just one of the two operands to a floating-point constant, but it's clearer to change both.)
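
For example, this also prints 0.500000, because the int operand 1 is converted to double before the division:

printf("%f", 1 / 2.0);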

Finally, you should print a newline at the end of your output:

printf("%f\n", 1.0 / 2.0);
Keith Thompson