
The expression `int.MinValue / -1` results in implementation-defined behavior according to the C# specification:

7.8.2 Division operator

If the left operand is the smallest representable int or long value and the right operand is –1, an overflow occurs. In a checked context, this causes a System.ArithmeticException (or a subclass thereof) to be thrown. In an unchecked context, it is implementation-defined as to whether a System.ArithmeticException (or a subclass thereof) is thrown or the overflow goes unreported with the resulting value being that of the left operand.

Test program:

```csharp
var x = int.MinValue;
var y = -1;
Console.WriteLine(unchecked(x / y));
```

This throws an OverflowException on 32-bit .NET 4.5, but it does not have to.

Why does the specification leave the outcome implementation-defined? Here's the case against doing that:

  1. The x86 idiv instruction always results in an exception in this case.
  2. On other platforms a runtime check might be necessary to emulate this (see the sketch after this list), but the cost of that check would be low compared to the cost of the division itself: integer division is extremely expensive (15-30 cycles).
  3. This opens compatibility risks ("write once run nowhere").
  4. Developer surprise.
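
For illustration, here is a minimal sketch of what such an emulation check might look like. The helper name is made up, and a real JIT would emit the equivalent compare-and-branch inline rather than call a method:

```csharp
using System;

static class DivGuard
{
    // Hypothetical guard that a JIT targeting a processor which does NOT
    // trap on division overflow could emit: two compares and a branch,
    // cheap next to the 15-30 cycles of the division itself.
    public static int Div(int dividend, int divisor)
    {
        // The only overflowing case in two's-complement signed division:
        // int.MinValue / -1 would be +2^31, one more than int.MaxValue.
        if (divisor == -1 && dividend == int.MinValue)
            throw new OverflowException();
        return dividend / divisor;
    }
}
```

Calling `DivGuard.Div(int.MinValue, -1)` would then throw on every platform, which is the behavior I am proposing.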

Also interesting is the fact that if `x / y` is a compile-time constant, we indeed get `unchecked(int.MinValue / -1) == int.MinValue`:

```csharp
Console.WriteLine(unchecked(int.MinValue / -1)); // -2147483648
```

This means that `x / y` can have different behaviors depending on the syntactic form used, and not only on the values of x and y. This is allowed by the specification, but it seems like an unwise choice. Why was C# designed like this?
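
To make the difference concrete, here is a small program contrasting the two syntactic forms; the runtime behavior noted in the comments is what I observe on Microsoft .NET on x86, and other implementations may legally differ:

```csharp
using System;

class SyntacticFormDemo
{
    static void Main()
    {
        // Constant expression: folded at compile time, wraps to int.MinValue.
        Console.WriteLine(unchecked(int.MinValue / -1)); // -2147483648

        // Same values via variables: the division happens at run time.
        var x = int.MinValue;
        var y = -1;
        try
        {
            Console.WriteLine(unchecked(x / y));
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException"); // observed on .NET/x86
        }
    }
}
```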

A similar question points out where in the specification this exact behavior is prescribed, but it does not (sufficiently) answer why the language was designed this way; alternative choices are not discussed.

usr
  • The comment by dimitry on [this answer](http://stackoverflow.com/a/26595091/73226) indicates the spec didn't always read that way. – Martin Smith Aug 02 '15 at 18:23
  • @HenkHolterman this question is about language design. The suggested duplicate does not answer this. (It's kind of awkward to reopen my own question.) – usr Aug 02 '15 at 18:36
  • The _why_ part is sufficiently answered in the answers on the dupe. When you think you have a more specific question, you should link to the original and spell out the differences. – H H Aug 02 '15 at 18:36
  • @HenkHolterman will do that now. – usr Aug 02 '15 at 18:36
  • @HenkHolterman yes, he explains the x86 situation which I'm aware of. But why is this implementation defined? I do not object to an exception being thrown. I wonder why alternative behavior is even allowed. – usr Aug 02 '15 at 18:39
  • @HenkHolterman I'd propose to define that an exception is always thrown. That seems to solve everything. – usr Aug 02 '15 at 18:54

2 Answers


This is a side-effect of the C# Language Specification's bigger brother, ECMA-335, the Common Language Infrastructure specification. Partition III, section 3.31 describes what the `div` opcode does. It is a spec that the C# spec very often has to defer to, pretty inevitably. It specifies that `div` may throw but does not demand it.

Otherwise it is a realistic assessment of what real processors do. And the one that everybody uses is the weird one: Intel processors are excessively quirky about overflow behavior. They were designed back in the 1970s with the assumption that everybody would use the INTO instruction. Nobody does, but that's a story for another day. The processor doesn't ignore overflow on an IDIV, however; it raises the #DE trap, and that loud bang can't be ignored.

Pretty tough to write a language spec on top of a woolly runtime spec on top of inconsistent processor behavior. There was little the C# team could do with that but forward the imprecise language. They already went beyond the spec by documenting OverflowException instead of ArithmeticException. Very naughty. They had a peek.

A peek that revealed the practice. It is very unlikely to be a problem: the jitter decides whether or not to inline, the non-inlined version throws, and the expectation is that the inlined version does as well. Nobody has been disappointed yet.
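
If you want to poke at it yourself, a little probe along these lines works; keep in mind that the MethodImpl attributes are only hints to the jitter, so the "inlined" label is an assumption, not a guarantee:

```csharp
using System;
using System.Runtime.CompilerServices;

class InlineProbe
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int DivNoInline(int a, int b)
    {
        return unchecked(a / b);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int DivMaybeInlined(int a, int b)
    {
        return unchecked(a / b);
    }

    static void Main()
    {
        // On x86/x64 both paths end in an idiv; both throw OverflowException.
        foreach (var div in new Func<int, int, int>[] { DivNoInline, DivMaybeInlined })
        {
            try { Console.WriteLine(div(int.MinValue, -1)); }
            catch (OverflowException) { Console.WriteLine("OverflowException"); }
        }
    }
}
```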

Hans Passant
  • ECMA mandates an exception. That would clear the way for C# to also mandate it. Why do they allow the non-throwing case although `div` never overflows? – usr Aug 03 '15 at 09:01
  • It doesn't, it only says that it *may* throw an ArithmeticException. Mandated behavior in the CLI spec is worded very differently. You can read it anyway you like, but you know how the C# team read it :) – Hans Passant Aug 03 '15 at 09:06
  • It says in "III.3.31": "Integral operations throw System.ArithmeticException if the result cannot be represented in the result type. (This can happen if value1 is the smallest representable integer value, and value2 is -1.) ". There is no other case. Is this not the right place in the document to look? The C# spec seems like a mistake or historic accident to me at this point. – usr Aug 03 '15 at 09:47
  • It is the right place. And it indeed does not say "This *will* happen". – Hans Passant Aug 03 '15 at 09:53

A principal design goal of C# is reputedly the "Law of Minimum Surprise". According to this guideline the compiler should not attempt to guess the programmer's intent, but rather should signal to the programmer that additional guidance is needed to properly specify intent. This applies to the case of interest because, within the limitations of two's-complement arithmetic, the operation produces a very surprising result: Int32.MinValue / -1 evaluates to Int32.MinValue. An overflow has occurred, and an unavailable 33rd bit (a leading 0) would be required to properly represent the correct value, Int32.MaxValue + 1.

As expected, and as noted in your quote, in a checked context an exception is raised to alert the programmer to the failure to properly specify intent. In an unchecked context the implementation is allowed either to behave as in the checked context, or to allow the overflow and return the surprising result. There are certain contexts, such as bit-twiddling, in which it is convenient to work with signed ints but where the overflow behaviour is actually expected and desired (see the sketch below). By checking the implementation notes, the programmer can determine whether this behaviour is actually as expected.
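
As an illustration of such a context, here is a minimal sketch of deliberate wrap-around arithmetic, the classic multiply-and-add hash combiner; the helper is illustrative, not taken from any particular library:

```csharp
static class HashCombiner
{
    // Deliberate wrap-around: in an unchecked context the multiply simply
    // overflows past int.MaxValue instead of throwing, which is exactly
    // the behaviour wanted when mixing hash codes.
    public static int Combine(int h1, int h2)
    {
        unchecked
        {
            return h1 * 31 + h2;
        }
    }
}
```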

Pieter Geerkens
  • It might be convenient to have unchecked division but it does not exist in practice on Microsoft .NET. That's because the overflow *only* happens when the operands are constant. If you need unchecked overflow there is no way to get it. – usr Aug 02 '15 at 18:32
  • @usr Doesn't the C# compiler use unchecked, whereas VB.Net uses checked by default? also see http://stackoverflow.com/a/13259794/57508 –  Aug 02 '15 at 18:50
  • @AndreasNiedermair run the first code snippet, it's unchecked and throws. (The 2nd one is unchecked and does not throw with the same inputs.) – usr Aug 02 '15 at 18:55