
From the haskell report:

The quot, rem, div, and mod class methods satisfy these laws if y is non-zero:

(x `quot` y)*y + (x `rem` y) == x
(x `div`  y)*y + (x `mod` y) == x

quot is integer division truncated toward zero, while the result of div is truncated toward negative infinity.

For example:

Prelude> (-12) `quot` 5
-2
Prelude> (-12) `div` 5
-3
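For reference, here is the same pair of divisions together with their remainders, plus a check that both laws from the report hold (a quick GHCi-style sanity check I wrote for x = -12, y = 5; not part of the report itself):

```haskell
-- Both division/remainder pairs for x = -12, y = 5,
-- and a check of the two laws from the report.
main :: IO ()
main = do
  let x = -12 :: Int
      y = 5
  print (x `quot` y, x `rem` y)                -- (-2,-2)
  print (x `div` y, x `mod` y)                 -- (-3,3)
  print ((x `quot` y) * y + (x `rem` y) == x)  -- True
  print ((x `div` y) * y + (x `mod` y) == x)   -- True
```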

What are some examples of where the difference between how the result is truncated matters?

grom

3 Answers


Many languages have a "mod" or "%" operator that gives the remainder after division truncated towards zero; for example C (since C99), C++ (since C++11), Java, and C# all say:

(-11)/5 = -2
(-11)%5 = -1
5*((-11)/5) + (-11)%5 = 5*(-2) + (-1) = -11.

Haskell's quot and rem are intended to imitate this behaviour. I can imagine compatibility with the output of some C program might be desirable in some contrived situation.
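A quick check (my own example, using the same -11 and 5 as the C snippet above) shows that `quot` and `rem` reproduce the C results:

```haskell
-- quot/rem reproduce C's truncate-toward-zero results for -11 and 5.
main :: IO ()
main = do
  print ((-11) `quot` 5)                          -- -2
  print ((-11) `rem` 5)                           -- -1
  print (5 * ((-11) `quot` 5) + ((-11) `rem` 5))  -- -11
```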

Haskell's div and mod, and likewise Python's // and % (Python's integer division floors), follow the convention of mathematicians (at least number theorists) of always truncating division downwards (not towards 0, but towards negative infinity), so that the remainder is always nonnegative for a positive divisor. Thus in Python,

(-11)//5 = -3
(-11)%5 = 4
5*((-11)//5) + (-11)%5 = 5*(-3) + 4 = -11.

Haskell's div and mod follow this behaviour.
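And the corresponding check in Haskell (again my own example, mirroring the Python snippet above):

```haskell
-- div/mod reproduce the floor-toward-negative-infinity results for -11 and 5.
main :: IO ()
main = do
  print ((-11) `div` 5)                          -- -3
  print ((-11) `mod` 5)                          -- 4
  print (5 * ((-11) `div` 5) + ((-11) `mod` 5))  -- -11
```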

ShreevatsaR
    "so that the remainder is always nonnegative" Technically, the sign of `mod` follows the sign of the second operand. – newacct May 07 '09 at 04:41
  • it's to maintain the property that `(q,r) = divMod x y` if and only if `x = q*y + r`. Run an example, it's clever how it works out. – luqui Dec 17 '10 at 00:28
    @luqui: No, that does not explain it. You can always have x=q*y+r with r nonnegative; e.g. if `divMod 11 (-5) = (-2, 1)` (instead of (-3,-4)), you'd still have "11 = (-2)*(-5) + 1". So your condition does not force the sign of `mod` to follow the second operand. BTW, the property that `x=q*y+r` is always true of quotRem as well, and there are always infinitely many pairs (q,r) such that `x=q*y+r` (and exactly two of these pairs have |r| – ShreevatsaR Dec 17 '10 at 04:09
  • hmm, yeah. Maybe `mod` is compensating for some related design decision in `div`? Not sure... – luqui Dec 17 '10 at 06:06
    Actually, while C99 and C++11 do specify the truncation-to-0 definition, earlier versions left it implementation-defined, so "compatibility with C" is an unlikely (or at least bad…) reason for other languages or programs to go with truncation-to-0. – abarnert Aug 28 '13 at 23:01
  • But in Python `11 % -5 == -4`, the result is negative. How do you fit your *"the convention of mathematicians"* for the case when dividend > 0 and divisor < 0, and then `a%b` is `<0`? – Rick Jan 17 '22 at 02:49

This is not exactly an answer to your question, but in GHC on x86, quotRem on Int will compile down to a single machine instruction, whereas divMod does quite a bit more work. So if you are in a speed-critical section and working on positive numbers only, quotRem is the way to go.
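As a sketch of this advice (`toMinSec` is a made-up helper of mine, not from the answer): for non-negative operands `quotRem` and `divMod` agree, so the potentially cheaper `quotRem` is a safe choice:

```haskell
-- Hypothetical helper: split a non-negative number of seconds
-- into (minutes, seconds). For non-negative operands quotRem and
-- divMod coincide, so the cheaper quotRem is safe here.
toMinSec :: Int -> (Int, Int)
toMinSec s = s `quotRem` 60

main :: IO ()
main = do
  print (toMinSec 125)                         -- (2,5)
  print (125 `quotRem` 60 == 125 `divMod` 60)  -- True
```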

luqui
    For solving the SPOJ primes, using rem rather than mod makes my test file run in 4.758s rather than 5.533s. This means the quicker version is about 16% quicker under 32-bit Ubuntu, Haskell Platform 2011. – Tim Perry May 05 '11 at 00:33
  • @TimPerry, I don't think that follows. What if you did one `mod` in your whole program and saw that same improvement? – luqui Jan 11 '13 at 17:55
    I stated that when I changed calls in my primes code from mod to rem, I saw a 20% speedup. It is not a theoretical comment; it is a description of an event. I changed only one thing (albeit in multiple places) and saw a 20% speedup. It seems a 20% speedup *DID* follow. – Tim Perry Jan 11 '13 at 22:30
  • @TimPerry ah I thought "the quicker version" referred to `rem`, not your modified program. (Not sure why I thought you wouldn't just say `rem` if that's what you meant though...) – luqui Jan 11 '13 at 22:32
  • On many architectures including x86, when dividing by non-constants, truncate-toward-zero division is slightly faster than floor-toward-negative-infinity, but when dividing by many constant values, especially powers of two, truncate-toward-zero is much faster (e.g. one instruction versus three). I would posit that speed-sensitive code is apt to have more "fast" divisions in it than slow ones. – supercat Nov 13 '13 at 20:53

A simple example where it would matter is testing if an integer is even or odd.

buggyOdd x = x `rem` 2 == 1
buggyOdd 1    -- True
buggyOdd (-1) -- False (wrong!)

odd x = x `mod` 2 == 1
odd 1    -- True
odd (-1) -- True

Note, of course, you could avoid thinking about these issues by just defining odd in this way:

odd x = x `rem` 2 /= 0
odd 1    -- True
odd (-1) -- True

In general, just remember that, for y > 0, x `mod` y always returns something >= 0, while x `rem` y returns 0 or something with the same sign as x.
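To see both rules at once, here is a small table for y = 3 (my own example, not from the answer):

```haskell
-- For y = 3 > 0: `mod` always lands in [0,2]; `rem` keeps the sign of x.
main :: IO ()
main = do
  mapM_ (\x -> print (x, x `mod` 3, x `rem` 3)) [-4, -1, 2, 5]
  -- (-4,2,-1)
  -- (-1,2,-1)
  -- (2,2,2)
  -- (5,2,2)
```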

namin