
Have a look at this C code:

#include <stdio.h>

int main()
{
    unsigned int y = 10;
    int x = -2;
    if (x > y)
        printf("x is greater");
    else
        printf("y is greater");
    return 0;
}
/*Output: x is greater.*/ 

I understand why the output is "x is greater": when the two operands are compared, x is converted to unsigned int by C's usual arithmetic conversions. On a platform with a 16-bit int, -2 becomes 65534 (with the more common 32-bit int it becomes 4,294,967,294), and either value is definitely greater than 10.

But why does the equivalent code in C# give the opposite result?

public static void Main(String[] args)
{
    uint y = 10;
    int x = -2;
    if (x > y)
    {
        Console.WriteLine("x is greater");
    }
    else
    {
        Console.WriteLine("y is greater");
    }
}
//Output: y is greater. 
Ronan Boiteau
David Klempfner
  • Because C# is better? :D – lukegravitt Aug 06 '13 at 23:30
  • I just learned something new about C today. I must admit I didn't like it. Does the compiler, at least, issue a warning? – Luis Filipe Aug 06 '13 at 23:38
  • possible duplicate of [unsigned int (c++) vs uint (c#)](http://stackoverflow.com/questions/8266089/unsigned-int-c-vs-uint-c) – lukegravitt Aug 06 '13 at 23:45
  • By the way, `int` in C# is 32 bits. So -2, when converted to `uint`, is 4,294,967,294 – Jim Mischel Aug 06 '13 at 23:56
  • A better question is, why did the designers of C believe that quietly promoting signed types to unsigned is acceptable behavior for a language? That godawful decision has been *(and still is)* the cause of [countless security bugs](http://stackoverflow.com/questions/3259413/3261019#3261019). – BlueRaja - Danny Pflughoeft Aug 06 '13 at 23:58
  • @BlueRaja-DannyPflughoeft - perhaps we shouldn't lose sight of the fact that C is over 40 years old. There were 40 routers on the planet at the time. In those days the burden of competency was very much more on the side of the programmer - relying on a compiler to save you from yourself would have been a concept that was laughable. They didn't expect monkeys to be programming in those days. – J... Aug 07 '13 at 00:12
  • @BlueRaja-DannyPflughoeft (and others): If you compile with warnings on, you will get a warning. See http://stackoverflow.com/q/765709/56778 – Jim Mischel Aug 07 '13 at 00:16
  • I don't really understand why this gets anybody upset. It means that you as a programmer still have to be smarter than a machine. Which is a Good Thing, nobody pays a compiler a living wage. The way the C# compiler handles this is well documented, there are no surprises here. – Hans Passant Aug 07 '13 at 00:27
  • @LuisFilipe: Yes, major compilers (GCC, Clang, MSC) issue warnings. In some odd cases you will *need* to disable the warnings, such as when you need to compare an `off_t` against a `size_t` -- since there's no unsigned version of `off_t` available, the best option is to generate that type using the "usual arithmetic conversions". – Dietrich Epp Aug 07 '13 at 03:50

3 Answers


In C#, both uint and int get promoted to a long before the comparison.

This is documented in section 4.1.5 (Integral types) of the C# language specification:

For the binary +, –, *, /, %, &, ^, |, ==, !=, >, <, >=, and <= operators, the operands are converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of both operands. The operation is then performed using the precision of type T, and the type of the result is T (or bool for the relational operators). It is not permitted for one operand to be of type long and the other to be of type ulong with the binary operators.

Since long is the first type that can fully represent all int and uint values, the variables are both converted to long, then compared.
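
To see the conversion spelled out, here is a minimal sketch (my example, not Reed's); the explicit casts just make visible what the compiler does implicitly:

using System;

class PromotionDemo
{
    static void Main()
    {
        uint y = 10;
        int x = -2;

        // Both operands are implicitly converted to long before the
        // comparison, so x > y is effectively (long)x > (long)y.
        Console.WriteLine(x > y);             // False: -2L > 10L is false
        Console.WriteLine((long)x > (long)y); // False: the same comparison, written out
    }
}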

Reed Copsey

In C#, in a comparison between an int and uint, both values are promoted to long values.

"Otherwise, if either operand is of type uint and the other operand is of type sbyte, short, or int, both operands are converted to type long."

http://msdn.microsoft.com/en-us/library/aa691330(v=vs.71).aspx
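
For contrast, a small sketch (my own, built on the rule quoted above): casting the int operand to uint by hand reproduces C's behaviour, while the default comparison promotes both operands to long:

using System;

class ConversionContrast
{
    static void Main()
    {
        uint y = 10;
        int x = -2;

        Console.WriteLine(x > y);                  // False: both promoted to long
        Console.WriteLine(unchecked((uint)x) > y); // True: -2 wraps to 4,294,967,294, as in C
    }
}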

Chris Pak

C and C# have differing views of what integral types represent. See my answer https://stackoverflow.com/a/18796084/363751 for some discussion of C's view. In C#, whether integers represent numbers or members of an abstract algebraic ring is determined to some extent by whether "checked arithmetic" is turned on or off, but that simply controls whether out-of-bounds computations should throw exceptions. In general, the .NET Framework regards all integer types as representing numbers, and aside from allowing some out-of-bounds computations to be performed without throwing exceptions, C# follows its lead.
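
A short illustration of that checked/unchecked distinction (a sketch of mine, not from the linked answer): the same out-of-bounds computation either wraps quietly or throws, depending on the context:

using System;

class CheckedDemo
{
    static void Main()
    {
        uint big = uint.MaxValue;

        // Unchecked (the default for non-constant expressions): wraps around.
        Console.WriteLine(unchecked(big + 1u)); // 0

        // Checked: the same computation throws instead of wrapping.
        try
        {
            Console.WriteLine(checked(big + 1u));
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow");
        }
    }
}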

If unsigned types represent members of an algebraic ring, adding e.g. -5 to an unsigned 2 should yield an unsigned value which, when added to 5, will yield 2. If they represent numbers, then adding -5 to an unsigned 2 should, if possible, yield a representation of the number -3. Since promoting the operands to Int64 allows exactly that, that's what C# does.
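
The -5 plus 2 example, written out as a sketch (assuming default project settings):

using System;

class RingVsNumber
{
    static void Main()
    {
        uint two = 2;
        int minusFive = -5;

        // int + uint promotes both operands to long, so the "number"
        // answer comes out rather than the wrapped-around "ring" answer.
        var sum = minusFive + two;        // sum is a long
        Console.WriteLine(sum);           // -3
        Console.WriteLine(sum.GetType()); // System.Int64
    }
}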

Incidentally, I dislike the notion that operators (especially relational operators!) should always work by promoting their operands to a common compatible type, should return a result of that type, and should accept without squawking any combination of operands which can be promoted to a common type. Given `float f; long l;`, there are at least three sensible meanings for a comparison `f==l` [it could cast l to float, it could cast l and f to double, or it could ensure that f is a whole number which can be cast to long, and that when cast it equals l]. Alternatively, a compiler could simply reject such a mixed comparison. If I had my druthers, compilers would be enjoined from casting the operands of relational operators except in cases where there is only one plausible meaning. Requiring that things which are implicitly convertible everywhere must be directly comparable is IMHO unhelpful.
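
C# happens to pick the first of those meanings, quietly converting the long to float, which is exactly the kind of silent decision being objected to. A sketch of the resulting surprise (my example):

using System;

class FloatLongCompare
{
    static void Main()
    {
        float f = 16777216f; // 2^24, exactly representable as a float
        long l = 16777217;   // 2^24 + 1, NOT exactly representable as a float

        // l is implicitly converted to float, losing precision, so two
        // different numbers compare equal.
        Console.WriteLine(f == l);         // True
        Console.WriteLine((double)f == l); // False: double holds 2^24 + 1 exactly
    }
}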

supercat