2

Is it possible that for two positive integers i and j, (-i)/j is not equal to -(i/j)? I can't figure out how this could be possible... I thought it would be something regarding bits, or overflow of a char type or something, but I can't find it. Any ideas?

syk435
  • 308
  • 1
  • 4
  • 10
  • It could be me, but in `(-i)/j`, `i` looks suspiciously **not**-positive. – WhozCraig Feb 14 '13 at 05:26
  • 1
    If using signed integers, this won't happen with positive integers. The only funny bit could be when using INT_MIN (the most negative integer) and negating that. – leppie Feb 14 '13 at 05:26
  • 2
    @WhozCraig The positive integers are `i` and `j`. In the example `i` is negated. – Hunter McMillen Feb 14 '13 at 05:26
  • @HunterMcMillen Yeah, I know. It was the immediate irony of the presentation which I was mulling. And I concur with leppie. This falls on its face when `i = INT_MIN`, but outside of that, I can't see it ever happening, or I'm just not far enough out of the box. – WhozCraig Feb 14 '13 at 05:32
  • 1
    Could be a problem with integers that don't fit into signed type. For example with `i = 1 << 31` and `j = 2` I get different results. – aragaer Feb 14 '13 at 06:32
  • @aragaer On my rig, anyway, that would be `INT_MIN` on a 32-bit signed integer (sign bit lit, nothing else), which falls into leppie's assessment of the inability to negate INT_MIN to a positive number, and therefore its failure for that one value. – WhozCraig Feb 14 '13 at 06:41

2 Answers

12

Pre-C99, it's possible because the rounding direction of integer division with a negative operand is implementation-defined: it can round toward negative infinity or toward zero. C99 requires rounding toward zero (truncation).

For example, C89 allows (-1)/2 == -1, while C99 requires (-1)/2 == 0. In all cases, -(1/2) == 0.
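Here is a small sketch (my own addition, not part of the original answer) that prints both expressions under C99 semantics and also computes the floor-divided quotient that a pre-C99 implementation was permitted to return:

#include <stdio.h>

int main(void)
{
  /* C99 requires truncation toward zero, so both print 0 */
  printf("(-1)/2 = %d\n", (-1)/2);
  printf("-(1/2) = %d\n", -(1/2));

  /* A pre-C99 implementation was also allowed to round toward
     negative infinity; that quotient can be computed explicitly
     and comes out as -1 on any conforming compiler: */
  int q = -1 / 2, r = -1 % 2;
  if (r != 0 && (r < 0) != (2 < 0))
    q--;
  printf("floor(-1/2) = %d\n", q);

  return 0;
}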

R.. GitHub STOP HELPING ICE
  • 208,859
  • 35
  • 376
  • 711
3

It is indeed possible when using unsigned integers to represent i and j (you said positive integers, right? :P).

For instance, the output of the following program is `(-i)/j 2147483647 -(i/j) 0` on my Intel 64-bit OS X machine (unsigned ints are 32 bits long):

#include <stdio.h>


int main()
{
  unsigned int i = 1;
  unsigned int j = 2;
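  /* -i wraps around to UINT_MAX, so (-i)/j is huge while -(i/j) stays 0 */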
  printf("(-i)/j %u -(i/j) %u\n", (-i)/j, -(i/j));

  return 0;
}

Looking at the assembler, the `negl` x86 instruction is used to compute the negation. `negl` performs a two's-complement negation. The two's-complement result, when interpreted as an unsigned value, causes the mismatch. But don't simply take my word for it; here's an example:

For instance, assuming 8-bit words, and for i=1 and j=2:

In binary form: i=00000001 j=00000010

-(i/j) = twos_complement(00000001/00000010) = twos_complement(00000000) = 00000000

(-i)/j = twos_complement(00000001)/00000010 = 11111111 / 00000010 = 01111111 (127 in decimal)
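That 8-bit walkthrough can be checked directly with `uint8_t` (a minimal sketch I've added; the casts are needed because `uint8_t` values promote to `int` before the arithmetic):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  uint8_t i = 1;
  uint8_t j = 2;
  /* the cast forces the negation back into 8 bits, i.e. 0xFF == 255 */
  uint8_t neg_i = (uint8_t)-i;
  printf("(-i)/j %d -(i/j) %d\n", neg_i / j, (uint8_t)-(i / j));  /* prints 127 and 0 */

  return 0;
}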

The mismatch triggered by the unsigned representation happens even with a C99 compiler. As @R.. states, another mismatch could also occur with a pre-C99 compiler, since the rounding of division involving negative operands was implementation-defined.

fons
  • 4,905
  • 4
  • 29
  • 49
  • Use of the phrase "twos complement" is a bit misleading since unary negation of an unsigned argument is not "twos complement", just reduction modulo `UINT_MAX+1`. But nice catch. I hadn't considered the case that OP might be concerned about unsigned types. – R.. GitHub STOP HELPING ICE Feb 14 '13 at 07:03
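To make that last point concrete, a tiny sketch (my addition, not from the thread) showing that negating an unsigned value is just reduction modulo `UINT_MAX+1`:

#include <stdio.h>
#include <limits.h>

int main(void)
{
  unsigned int i = 1;
  /* unary minus on an unsigned operand wraps modulo UINT_MAX+1 */
  printf("-i           = %u\n", -i);
  printf("UINT_MAX-i+1 = %u\n", UINT_MAX - i + 1);  /* same value */

  return 0;
}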