
I understand that in Java, if I divide two integers and the result isn't an integer, the fractional part is truncated and I get an integer result from the division.
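
For concreteness, here is a minimal demonstration of the behavior (the values are my own):

```java
int a = 7;
int b = 2;
System.out.println(a / b);   // prints 3: both operands are int, so the fraction is dropped
System.out.println(7.0 / 2); // prints 3.5: one double operand makes it floating-point division
```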

This never made sense to me! I'm wondering if I could get some insight into why Java is designed to do this.

One of the answers here:

Why is the division result between two integers truncated?

said that you usually expect an integer result when you perform an operation on two integers, but this just isn't true.

When calculating percentages, for example, `num_answers_correct / num_questions` is an operation on two integers where I expect a fraction.
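
A minimal sketch of that pitfall, using names adapted from the question (the values are hypothetical):

```java
int numAnswersCorrect = 7;
int numQuestions = 10;
// The division happens first, as int / int, and truncates to 0
// before the multiplication or the widening to double takes place.
double percent = numAnswersCorrect / numQuestions * 100; // 0.0, not 70.0
```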

It seems dangerous to me to have to be so aware of what type the variable `num_answers_correct` is, and so on, especially in a high-level language.

It's easy to accidentally perform an integer division when you wanted floating-point division, but never vice versa. Wouldn't it be less error-prone to make the programmer indicate that they intend to truncate the result, rather than make the programmer:

  • Realize that they are dividing two integers
  • Force floating-point division using a float cast (or something like that; see the sketch below)?
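
For reference, a sketch of that workaround: widening one operand with a cast so the division is carried out in floating point:

```java
int numAnswersCorrect = 7;
int numQuestions = 10;
// The cast promotes the left operand, so the division is done in double arithmetic.
double percent = (double) numAnswersCorrect / numQuestions * 100; // 70.0
```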

In the link mentioned above, someone said that Visual Basic 6 does exactly this -- `/` is an operator that returns a double, and `\` is an operator that does integer division. This person said it was too confusing to have two operators; I don't see how it would be confusing, though.

So, my questions:

  1. Do I have a valid argument, or am I missing something?
  2. Why does Java use integer division?

Could someone help me see?

silph
  • Unless there's someone here who was working for Sun when this decision was made, this question is essentially unanswerable. – Dawood ibn Kareem Mar 12 '14 at 03:01
  • Well, the answer is roughly "because that's how C did things", though that of course begs the question... – dlev Mar 12 '14 at 03:02
  • Right, let me rephrase. Unless Dennis Ritchie is here ... – Dawood ibn Kareem Mar 12 '14 at 03:03
  • `int mynum = 16; int anotherNum = 2; float theAnswer = mynum / anotherNum;` I don't see why this is confusing, personally. And it saves space at the cost of the developer paying attention. If anything, signed versus unsigned would be a bigger issue. – Austin T French Mar 12 '14 at 03:03
  • A short and too simplistic answer: Java does it because C does it. C does it because the hardware (the arithmetic logic unit) does it (too simplistic, I know, but a more complete answer would be something like that) – morgano Mar 12 '14 at 03:04
  • I doubt whether Dennis Ritchie was thinking of Sudoku when he designed C. – Dawood ibn Kareem Mar 12 '14 at 03:04
  • http://math.stackexchange.com/questions/126246/is-integer-division-uniquely-defined-in-mathematics – assylias Mar 12 '14 at 03:05
  • Java does it because C did it because BCPL did it because B did it because Algol-60 did it and Fortran did it, because the hardware did it, and still does it. Apologies for any steps I've left out along the way, and for any unwarranted inferences. Other languages that do it: Cobol, PL/1 and all its derivatives, Pascal, Modula, every flavour of Basic I've ever used, ... matter of fact I'm not aware of any language that *doesn't* do it. So don't blame Java. Ultimately it comes down to the fact that `int op int` yields `int` for *any* operator `op`, not just division. – user207421 Mar 12 '14 at 03:08
  • @EJP Languages that do automatic typecasting would be a decent exception, no? PowerShell, Python... – Austin T French Mar 12 '14 at 03:10
  • @AustinFrench I'm no expert on those, fortunately :-| If you're asserting that division of integers yields an FP value in Python I'm not sure you're correct. – user207421 Mar 12 '14 at 03:11
  • Base conversion would be cumbersome without integer division. So would, say, converting seconds to HH:MM:SS. Or converting a scalar index into a row/column in a 2D array. And infinite other common tasks you take for granted. You could always just cast to a float and round the results if you want. – Jason C Mar 12 '14 at 03:11 (see the sketch after these comments)
  • @EJP Python and PS will try to implicitly cast the type, so int / int will auto type cast to a decimal, fp or what it deems best unless the developer explicitly casts the variable. – Austin T French Mar 12 '14 at 03:15
  • @AustinFrench But I read [here](http://stackoverflow.com/a/2958717/207421) that "Python 2.x ... integer divisions will truncate instead of becoming a floating point number", and I find another SO question entitled ["how can I force division to be floating point in Python?"](http://stackoverflow.com/q/1267869/207421) where it is also stated that what you assert only starts in Python 3. Speaking as a language designer *inter alia* it strikes me as a ridiculously fundamental thing to change in release 3 of a language. – user207421 Mar 12 '14 at 03:20
  • @EJP You've never used VB then, I guess. I think that Microsoft actually got this right. There's a good deal of sense in using / for division and a different symbol entirely for divide-and-truncate. – Dawood ibn Kareem Mar 12 '14 at 04:13
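
To make Jason C's point above concrete, here is a minimal sketch of two of the tasks he mentions (the numbers are my own):

```java
// Seconds to HH:MM:SS: truncation is exactly what we want here.
int totalSeconds = 3723;
int hours = totalSeconds / 3600;          // 1
int minutes = (totalSeconds % 3600) / 60; // 2
int seconds = totalSeconds % 60;          // 3
System.out.printf("%02d:%02d:%02d%n", hours, minutes, seconds); // 01:02:03

// A flat index mapped to the row/column of a conceptual 2D grid.
int width = 5;
int index = 13;
int row = index / width; // 2
int col = index % width; // 3
```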

0 Answers