40

Can anyone explain why b gets rounded off here when I divide it by an integer although it's a float?

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.00 2.14
}

http://codepad.org/j1pckw0y

mushroom
  • 1,909
  • 3
  • 16
  • 33
  • 5
    "Why?" - Because the language was designed that way. If you want a `float`, you cast to a `float` first. – Mysticial Apr 25 '13 at 18:13
  • 1
    Just because the left-hand side of an assignment is a float doesn't mean that the right-hand side has to be; it only means that the right-hand side must offer _equal or less precision_ than a float, hence the compiler has no reason to make it anything other than int. – ApproachingDarknessFish Apr 25 '13 at 18:14
  • Because `a` and `350` are `int`s. – Daniel Fischer Apr 25 '13 at 18:15
  • 1
    Because that's the way Kernighan and Ritchie defined it. – Hot Licks Apr 25 '13 at 18:27
  • 1
    @ValekHalfHeart “equal or less precision” has nothing to do with it. `int i = 2.0;` and `double d = 1;` are both valid, whatever your definition of “precision” is. – Pascal Cuoq Apr 25 '13 at 18:34

7 Answers

49

This is because of implicit conversion. The variables b, c, d are of float type, but the / operator sees two integer operands, so it performs integer division and produces an integer result; that integer is then implicitly converted to a float on assignment. If you want floating-point division, make at least one of the operands to / a floating-point value, as follows.

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0f;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
    return 0;
}
Sukrit Kalra
  • 33,167
  • 7
  • 69
  • 71
21

Use casting of types:

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / (float)350;
    c = 750;
    d = c / (float)350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}

This is another way to solve that:

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0; // if you use 'a / 350' here,
                   // then it is a division of integers,
                   // so the result will be an integer
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}

In both cases you are telling the compiler that 350 should be treated as a floating-point value rather than an integer (a `float` with the cast, a `double` with the `350.0` literal). Consequently, the result of the division is a floating-point value, not an integer.

Cacho Santa
  • 6,846
  • 6
  • 41
  • 73
  • 2
    Would this produce the same result? `b = (float) a / 350;` – mushroom Apr 25 '13 at 18:16
  • 1
    Yes, it would. You need to typecast any one to a `float`. – Sukrit Kalra Apr 25 '13 at 18:18
  • 8
    Actually, `350.0` is a `double`, and the result of the operation would be a `double` that is then converted to `float` for the assignment. `350.0f` would be a `float`. – Daniel Fischer Apr 25 '13 at 18:18
  • 2
    @DanielFischer Thanks for the clarification. What actually is the difference between a double and a float? – mushroom Apr 25 '13 at 18:21
  • `double` is simply double precision. It has twice the precision offered by a `float`. – Sukrit Kalra Apr 25 '13 at 18:23
  • double has more precision because it contains more bytes to hold the value. Try this `printf("double = [%zu], float = [%zu]", sizeof(double), sizeof(float));` to check it for yourself. – Cacho Santa Apr 25 '13 at 18:23
  • 6
    @jon The range and precision of the types. They _may_ be equal, but usually, a `float` is a 32-bit type with 24 bits of precision, the largest finite number it can store is 3.4028235e38, the smallest positive number 1.0e-45, while `double` is a 64-bit type with 53 bits of precision capable of storing numbers up to 1.7976931348623157e308 and as small as 5.0e-324. – Daniel Fischer Apr 25 '13 at 18:27
2

"a" is an integer, when divided with integer it gives you an integer. Then it is assigned to "b" as an integer and becomes a float.

You should do it like this

b = a / 350.0;
2

Specifically, this is not rounding your result; it's truncating toward zero. So if you divide -3/2, you'll get -1 and not -2. Welcome to integral math! Back before CPUs could do floating-point operations, or math co-processors arrived, we did everything with integral math. Even though there were libraries for floating-point math, they were too expensive (in CPU instructions) for general use, so we used a 16-bit value for the whole portion of a number and another 16-bit value for the fraction.

EDIT: my answer makes me think of the classic old man saying "when I was your age..."

Daniel Santos
  • 3,098
  • 26
  • 25
2

Chapter and verse

6.5.5 Multiplicative operators
...
6 When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.105) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a; otherwise, the behavior of both a/b and a%b is undefined.

105) This is often called ‘‘truncation toward zero’’.

Dividing an integer by an integer gives an integer result. 1/2 yields 0; assigning this result to a floating-point variable gives 0.0. To get a floating-point result, at least one of the operands must be a floating-point type. b = a / 350.0f; should give you the result you want.

John Bode
  • 119,563
  • 19
  • 122
  • 198
0

Probably the best reason is because 0xfffffffffffffff/15 would give you a horribly wrong answer...

R.. GitHub STOP HELPING ICE
  • 208,859
  • 35
  • 376
  • 711
  • 1
    Why the downvote? The expectation that integer division give exact results is probably the best reason that division should not implicitly "promote" to a floating point type. (Incidentally, this is one of the things that makes Lua's treatment of numbers hideous to work with.) – R.. GitHub STOP HELPING ICE Apr 25 '13 at 18:40
  • The question *can* be interpreted as “Why does `int/int` not evaluate as a float **since I immediately assign it to a float variable**?”. Your answer, although insightful, does not address what *may* actually be the OP's misunderstanding. Apart from that, maybe it was downvoted because it was just too concise. I thought about it for five minutes and I arrived at the conclusion that it was funny and/or insightful for a reason other than the one you eventually gave. – Pascal Cuoq Apr 25 '13 at 18:59
0

Dividing two integers will result in an integer (whole number) result.

You need to cast one number as a float, or add a decimal point to one of the numbers, like `a / 350.0`.

kyle_13
  • 1,173
  • 6
  • 25
  • 47