
I'm fairly new to programming, and am starting to get used to C. I apologize if this is a repeat question (I don't know the underlying process i.e. what to search for).

I'm working with a simple program to get used to the nuances of data types:

main(){
    int i;
    float a,b;


    i = 2;
    a = 2.0;
    b = 4.0;

    printf("%d %1.1f", i/b,a/b);
}

I expected the program to print 0 0.5 (since a and b are both floats and I am printing their ratio as a float), yet the program printed 0 0.0 (I'm using gcc -o). However, when I reverse the printf order (without switching the order of corresponding variables), that is:

printf("%1.1f %d", i/b,a/b);

The print result is 0.5 0. I'm not exactly sure what's happening here. It appears that in the first program b is being converted to int in i/b and fails to be converted to float in a/b. However, in the second variant, b doesn't have any trouble printing out as two different types. Can ints not be coerced into floats? Can someone explain this or point me in the right direction?

Thanks!

byrnesj1
  • Could you post the result of `gcc --version`? – Schwern Aug 12 '15 at 23:41
  • possible duplicate of [Why dividing two integers doesn't get a float?](http://stackoverflow.com/questions/16221776/why-dividing-two-integers-doesnt-get-a-float) – Maddy Aug 12 '15 at 23:41
  • Wait, this isn't an integer division problem... – Purag Aug 12 '15 at 23:42
  • `a` and `b` are both floats, so the result of that division should be `0.5`, yet printing that as a float results in `0.0`... – Purag Aug 12 '15 at 23:42
  • @Schwern gcc (GCC) 4.6.3 20111208 (prerelease), invoked as gcc -o. Compiling with -Wall gives some helpful warnings; on the first variant: '%d' expects argument of type 'int', but argument 2 has type 'double'. However, I still get the same (0 0.0) result in the executable. – byrnesj1 Aug 12 '15 at 23:44
  • I'm not satisfied with any of the answers; while it's undefined behavior, I'd like to know exactly what is happening to render this result. I'll answer shortly. :) – Purag Aug 13 '15 at 00:00

2 Answers


You are invoking undefined behaviour.

printf is a variadic function; there is no link between the format string and how the other arguments are passed to the function. The usual arithmetic conversions apply to /, so i/b is a float, which is promoted to double when passed to printf. The undefined behaviour happens when the function tries to read that argument as an int.

If you compile with -Wall you will see this warning:

a.c: In function ‘main’:
a.c:12:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘double’ [-Wformat=]
     printf("%d %1.1f", i/b,a/b);
             ^
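
To make that concrete, here is a minimal sketch of the two possible fixes (same variable names and values as in the question): either print both ratios with a floating-point specifier, or explicitly cast the expression printed with %d.

#include <stdio.h>

int main(void){
    int i = 2;
    float a = 2.0, b = 4.0;

    /* Option 1: print both ratios as floating point. */
    printf("%1.1f %1.1f\n", i/b, a/b);       /* prints 0.5 0.5 */

    /* Option 2: cast the first ratio to int so it matches %d. */
    printf("%d %1.1f\n", (int)(i/b), a/b);   /* prints 0 0.5 */
    return 0;
}
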
dfogni
  • So, the fix is... either use the `%f` format specifier for both expressions, or force the expression that is using the `%d` specifier to be an `int` (perhaps by using a cast to make it so). – Michael Burr Aug 12 '15 at 23:53

I get a different result.

#include <stdio.h>

int main(){
    int i;
    float a,b;


    i = 2;
    a = 2.0;
    b = 4.0;

    printf("%d %1.1f", i/b,a/b);
}

1432933728 0.5

gcc -Wall flags the problem. You should always compile with -Wall to get helpful warnings.

$ gcc -Wall try.c 
try.c:12:24: warning: format specifies type 'int' but the argument has type 'float' [-Wformat]
    printf("%d %1.1f", i/b,a/b);
            ~~         ^~~
            %f
1 warning generated.

printf does something strange when it is told to treat the float as an integer. C does not convert the data for you; printf simply reads the argument as though it were an int, reinterpreting bits that were never meant to be one, so you usually get garbage.
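
If you want to see what the raw bits of a float look like when read as an integer, here is a small sketch. It is only an illustration; it assumes IEEE 754 single-precision floats and that unsigned int and float are the same size, and it is not exactly what printf does with a mismatched specifier.

#include <stdio.h>
#include <string.h>

int main(void){
    float f = 0.5f;
    unsigned int bits;

    /* Copy the float's raw bytes into an integer of the same size. */
    memcpy(&bits, &f, sizeof bits);

    printf("as a float: %f\n", f);                 /* 0.500000 */
    printf("same bits as an integer: %u\n", bits); /* 1056964608 with IEEE 754 */
    return 0;
}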

When I fix that by changing it to printf("%1.1f %1.1f", i/b,a/b); I get the expected result of 0.5 0.5.

This is Apple's "gcc", which is actually clang.

$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.4.0
Thread model: posix

While I got a different result, the lesson is that you can't print a float as an integer with %d; the format specifier only tells printf how to read the argument, it does not convert it. With a variadic function like printf the compiler has no parameter type to convert the argument to, so you have to do the conversion yourself with an explicit cast. You could have written this...

printf("%1.1f %d", i/b, (int)(a/b));

Note the extra parentheses are necessary because a cast has higher precedence than division.

Rather than relying on type casting, you get better control by explicitly rounding the floating point result using ceil(), floor(), round() and related functions from <math.h>.
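
A minimal sketch of that approach (it assumes <math.h> and, on many systems, linking with -lm):

#include <stdio.h>
#include <math.h>

int main(void){
    float a = 2.0, b = 4.0;

    /* These all return double; cast to int before printing with %d. */
    printf("floor: %d\n", (int)floor(a/b));   /* 0 */
    printf("ceil:  %d\n", (int)ceil(a/b));    /* 1 */
    printf("round: %d\n", (int)round(a/b));   /* 1 */
    return 0;
}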

Schwern
  • Not bizarre, it's just treating the IEEE 754 floating point format as a binary integer, which is understandably just a large number. – Purag Aug 12 '15 at 23:43
  • @Purag It only makes sense if you already know about IEEE floating point numbers. Most humans think of numbers as numbers. Most programmers only have experience with high level languages which abstract the internal representation. The ways computers represent numbers and do math have little in common with how we were all taught math in school. Yes, it's bizarre. That you don't think it's bizarre... is bizarre. :) – Schwern Aug 13 '15 at 00:28
  • It's bizarre for someone just learning, indeed. However, I take every answer as an opportunity to peel back a layer of abstraction, since it can only help the programmer to know exactly what's going on under the hood. This is the way I learned C, and I think it helps me. :) I don't underestimate people reading this answer, so I think it would do better with a brief explanation of how the floating point is being interpreted in the way it is. :) – Purag Aug 13 '15 at 00:48
  • @Purag I think introducing type casting is plenty for one answer. As for why the OP is getting that odd result from `printf()`, `0 0.0` instead of the expected `0 0.5`, I'm chalking it up to some internal state in `printf()` getting corrupted after it was given a float for `%d`, or maybe the optimizer got confused. – Schwern Aug 13 '15 at 01:47
  • Thanks for the insight, it really helps. I am still a little puzzled by my output, however I think the problems are compiler specific and not general. In your output, I can grasp that `%d` converts i/b to its binary representation, and `%1.1f` correctly prints `0.5` = a/b. What I don't understand is why trying to print `i/b` as `%d` would affect printing `a/b` as `%1.1f` in my output (if I put `%1.1f` <- `a/b` before `%d` <- `i/b` I get `0.5 0`). Further, `printf("%d %1.1f", i/b, a)`; gives me `0 0.0` -- which again I think has to do with my compiler. Anyways, thanks again. – byrnesj1 Aug 13 '15 at 09:31
  • @byrnesj1: The `%d` doesn't *convert* `i/b` to its binary representation. It's already in its binary representation. The `%d` format causes `printf` to *assume* that the argument is of type `int`. It's the lack of a conversion that causes the problem. (A *conversion* would translate a value of one type to a value of another type, keeping the same mathematical value if possible.) – Keith Thompson Aug 13 '15 at 15:05
  • The trick here is that the same pattern of bits can mean very, very different things based on how it's interpreted. `int`, `float`, and all the types are just different ways of interpreting bit patterns. – Purag Aug 13 '15 at 18:54
  • @byrnesj1 *`What I don't understand is why trying to print i/b as %d would affect printing a/b as %1.1f`*. Yes, that is puzzling. My best idea is that once you've fed printf the wrong types, something about its internal state got corrupted. This would change from implementation to implementation and version to version. – Schwern Aug 13 '15 at 19:34