Because the C# Specification (§2.4.4.2) dictates that if an integer literal has no decimal point or suffix, its type is the first of `int`, `uint`, `long`, or `ulong` that can represent its value. Naturally `10` fits in an `Int32`, so that type is chosen.
The type of an integer literal is determined as follows:
- If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
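A quick sketch of that rule in action (the literal values below are just illustrative; the inferred types are shown as comments):

```csharp
using System;

class LiteralTypes
{
    static void Main()
    {
        var a = 10;           // int:  10 fits in Int32
        var b = 3000000000;   // uint: too big for int (max 2147483647), fits in uint
        var c = 5000000000;   // long: too big for uint (max 4294967295)
        var d = 10UL;         // a suffix overrides the rule: ulong regardless of size

        Console.WriteLine(a.GetType()); // System.Int32
        Console.WriteLine(b.GetType()); // System.UInt32
        Console.WriteLine(c.GetType()); // System.Int64
        Console.WriteLine(d.GetType()); // System.UInt64
    }
}
```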
The C# language designers decided that even though `10` would fit in an `Int16`, usage of that type would be relatively rare, so `Int32` is the "status quo".
Actually, if memory serves, there is no `Int16` literal suffix in C# anyway. You'd have to explicitly declare an `Int16` variable and assign a constant to it; the compiler implicitly converts an in-range constant expression to `short` for you.
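To make that concrete, here's a small sketch of what does and doesn't compile (the error lines are commented out so the snippet builds):

```csharp
class ShortConstants
{
    static void Main()
    {
        short s = 10;        // OK: in-range constant is implicitly converted to short
        // short t = 40000;  // compile-time error: constant out of range for short
        // short u = 10S;    // no such suffix exists; 'S' is not valid C#
        const int k = 100;
        short v = k;         // also OK: constant *expressions* in range convert implicitly
    }
}
```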
Interestingly, I looked at the compiled IL, and declaring `short s = 10;` and `int i = 10;` generates exactly the same IL... so now I'm wondering how shorts are managed; perhaps they are actually managed as 32-bit values in the CLI anyway. Indeed, per ECMA-335, the CLI evaluation stack only holds `int32`, `int64`, native int, floating point, and object/managed references, so a `short` is widened to 32 bits whenever it's loaded; the 16-bit size only matters for storage locations (fields, array elements, locals).