
I always come across code that uses int instead of uint for things like .Count, even in the framework classes.

What's the reason for this?

– Joan Venge

7 Answers


UInt32 is not CLS compliant, so it might not be available in all languages that target the Common Language Specification. Int32 is CLS compliant and is therefore guaranteed to exist in all languages.
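A minimal sketch of how this surfaces in practice (the type and member names here are hypothetical): when an assembly is marked CLS-compliant, the C# compiler warns about public members that expose non-CLS types such as uint.

    using System;

    [assembly: CLSCompliant(true)]

    public class Inventory
    {
        // The compiler warns here: the type of 'Count' is not CLS-compliant.
        public uint Count { get; set; }

        // No warning: Int32 is CLS-compliant.
        public int SafeCount { get; set; }
    }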

– dtb
  • But why would this affect whether a person codes in a particular language with Int32? If language X doesn't accept it, but language Y does, why would I care about X if I'm programming in Y? Is it just for examples which are implemented in many languages to show off features, or is there some other reason why I should program in Y with the restrictions in X? – Adam Davis Apr 23 '09 at 17:14
  • The idea of the Common Language Specification is that you can have a single Base Class Library for all languages. If the BCL only uses CLS-compliant features and all languages support all CLS-compliant features, then the BCL can be used from all languages. The same applies to your class libraries if you want them to be usable from languages other than the one they were written in. – dtb Apr 23 '09 at 17:24
  • @Adam: there's nothing that says that your code must conform to only what the BCL provides. However, the BCL is intended to be consumable by any CLS-compliant assembly, so the BCL must restrict itself to CLS features. – Michael Burr Apr 23 '09 at 17:45

int, in C, is specifically defined to be the default integer type of the processor, and is therefore generally held to be the fastest type for general numeric operations.
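If you want to check this yourself, here is a rough microbenchmark sketch using Stopwatch (not the MeasureIt tool recommended in the comments below; results will vary by JIT, CPU, and compiler, and a loop this simple is easily optimized away in release builds):

    using System;
    using System.Diagnostics;

    class Benchmark
    {
        static void Main()
        {
            const int N = 100_000_000;

            var sw = Stopwatch.StartNew();
            int signedSum = 0;
            for (int i = 0; i < N; i++)
                signedSum += i;          // wraps silently in unchecked context
            sw.Stop();
            Console.WriteLine($"int:  {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            uint unsignedSum = 0;
            for (uint i = 0; i < N; i++)
                unsignedSum += i;        // same loop with unsigned arithmetic
            sw.Stop();
            Console.WriteLine($"uint: {sw.ElapsedMilliseconds} ms");
        }
    }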

– Adam Davis
  • Thanks Adam, I actually need the extra range of the uint, but do you think using it would degrade my app's performance? – Joan Venge Apr 23 '09 at 17:00
  • In modern processors no, unsigned int and signed int should be about the same. If there's a difference it's marginal, and if you need every last ounce of performance it's worth benchmarking for your particular compiler/OS/CPU/etc. – Adam Davis Apr 23 '09 at 17:01
  • No, it would not change the performance of your program. To get answers like this, a microbenchmark is your friend (I recommend MeasureIt). *On my computer* adding uints seems to perform about 20% faster than ints (although measurements this small aren't too reliable). In general worrying about this would be considered optimizing prematurely; use whatever suits the situation best and optimize it later if it causes a _measurable_ performance impact. – Eric Burnett Apr 23 '09 at 20:39
  • @AdamDavis: The smallest type that can hold the arithmetical difference between any two arbitrary `Int32` values of the same sign is an `Int32`. To hold the difference between two arbitrary `UInt32` values [which would naturally have the same "sign"] would require an `Int64`. IMHO, .NET might have benefited from a `UInt31` type which could be implicitly cast to `Int32`. – supercat May 08 '15 at 20:10
  • This isn't about C. It's about C#. – S.S. Anne Sep 10 '19 at 11:50
  • @JL2210 Correct. This answer applies to all modern processors that might run a dotnet application, and the answer is the same for both. – Adam Davis Sep 13 '19 at 13:38

Operations on unsigned types only behave like whole-number arithmetic if the sum or product of a signed and an unsigned value is computed in a signed type large enough to hold either operand, and if the difference between two unsigned values is computed in a signed type large enough to hold any result. Thus, code which makes significant use of UInt32 will frequently need to compute values as Int64.

Operations on signed integer types may fail to behave like whole numbers when the operands are very large, but they behave sensibly when the operands are small. Operations on unpromoted arguments of unsigned types pose problems even when the operands are small. Given UInt32 x, for example, the inequality x-1 < x will fail for x==0 if the result type is UInt32, and the inequality x<=0 || x-1>=0 will fail for large x values if the result type is Int32. Only if the operation is performed on type Int64 can both inequalities be upheld.
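A small sketch of that first inequality in C# (using the default unchecked arithmetic):

    using System;

    class UnsignedPitfall
    {
        static void Main()
        {
            uint x = 0;

            // x - 1 is computed as a uint and wraps to uint.MaxValue,
            // so the "obvious" whole-number inequality fails:
            Console.WriteLine(x - 1 < x);        // False

            // Promoting to a signed 64-bit type restores whole-number behavior:
            Console.WriteLine((long)x - 1 < x);  // True  (-1 < 0)
        }
    }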

While it is sometimes useful to define unsigned-type behavior in ways that differ from whole-number arithmetic, values which represent things like counts should generally use types that will behave like whole numbers--something unsigned types generally don't do unless they're smaller than the basic integer type.

– supercat

Some things use int so that they can return -1 as a kind of "null" value. For example, a ComboBox returns -1 for its SelectedIndex if no item is selected.
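A small sketch of the sentinel pattern (the handler is hypothetical and assumes a System.Windows.Forms reference; SelectedIndex really is an int with -1 meaning "no selection"):

    void OnSelectionChanged(System.Windows.Forms.ComboBox comboBox)
    {
        int index = comboBox.SelectedIndex;
        if (index == -1)
        {
            // Nothing is selected; an unsigned index type could not encode this.
        }

        // The alternative suggested in the comments below: a nullable int.
        int? selected = index == -1 ? (int?)null : index;
    }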

– Max Schmeling
  • Those are fine, but .Count doesn't make sense with negative values. – Joan Venge Apr 23 '09 at 18:15
  • By the way, those are not really fine. The reason SelectedIndex returns -1 is that the first .NET version had no nullable types. It's highly recommended to return null if something was not found or chosen. – EngineerSpock May 28 '13 at 09:24

UInt32 isn't CLS-compliant: http://msdn.microsoft.com/en-us/library/system.uint32.aspx

I think that over the years people have come to the conclusion that using unsigned types doesn't really offer that much benefit. The better question is: what would you gain by making Count a UInt32?
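One concrete illustration of the downside (a hypothetical sketch, not from the original answer): if Count were unsigned, a common reverse loop would never terminate, because a uint is always >= 0.

    uint count = 3;
    for (uint i = count - 1; i >= 0; i--)  // condition is always true for a uint
    {
        // When i reaches 0, i-- wraps around to uint.MaxValue
        // instead of going negative, so the loop never exits.
    }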

  • If anything, I would make Count a UInt16, which needs half the space, but you can still have lists as big as with Int32... – Philipp M Mar 06 '13 at 11:22
  • UInt16 does not equal Int32: the positive values of an Int32 cover the range of an Int31, not an Int16. – EngineerSpock May 28 '13 at 09:44

If the number is truly unsigned by its intrinsic nature, then I would declare it an unsigned int. However, if I just happen to be using a number that is (for the time being) in the positive range, then I would call it an int.

The main reasons being:

  • It avoids having to do a lot of type-casting, as most methods/functions are written to take an int and not an unsigned int (see the sketch after this list).
  • It eliminates possible truncation warnings.
  • You invariably end up wishing you could assign a negative value to the number that you had originally thought would always be positive.
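A quick sketch of that casting friction (the BCL members shown are real; the scenario is made up):

    using System;
    using System.Collections.Generic;

    class CastingFriction
    {
        static void Main()
        {
            uint offset = 3;

            // String.Substring takes an int, so the uint must be cast:
            string tail = "example".Substring((int)offset);   // "mple"

            // List<T>.Capacity is an int as well:
            var items = new List<string>();
            items.Capacity = (int)offset;

            Console.WriteLine(tail);
        }
    }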

These are just a few quick thoughts that came to mind.

I used to try to be very careful and choose the proper unsigned/signed type, and I finally realized that it doesn't really produce a positive benefit. It just creates extra work. So why make things hard by mixing and matching?

– Dunk

Some old libraries, and even InStr, use negative numbers to signal special cases. I believe it's either laziness or a genuine need for special negative values.
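String.IndexOf is a familiar example of this pattern: it returns -1 when the value is not found, which only works with a signed return type.

    // String.IndexOf uses -1 as its "not found" sentinel:
    int pos = "hello".IndexOf('z');   // -1: 'z' does not occur
    if (pos < 0)
    {
        // handle the not-found case; a uint return type could not signal this
    }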

– Daniel A. White