Rationale
The other answers are misleading or mistaken.
Prologue
To understand this, it can help to first read my answer to How do ValueTypes derive from Object (ReferenceType) and still be ValueTypes?
So what's going on?
System.Int32 is a struct that contains a 32-bit signed integer. It does not contain itself. In IL, the general syntax for referencing a value type is valuetype [assembly]Namespace.TypeName. Built-in types, however, are a special case:
ECMA-335, §II.7.2 (Built-in types):
The CLI built-in types have corresponding value types defined in the Base Class Library. They shall be referenced in signatures only using their special encodings (i.e., not using the general purpose valuetype TypeReference syntax). Partition I specifies the built-in types.
This means that, if you have a method that takes a 32-bit integer, you mustn't use the general valuetype [mscorlib]System.Int32 syntax, but the special encoding for the built-in 32-bit signed integer, int32.
In C#, this means that whether you write System.Int32 or int, both compile to int32, not to valuetype [mscorlib]System.Int32.
You may have heard that int is an alias for System.Int32 in C#, but in reality both are aliases for the built-in CLI value type int32.
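A minimal C# sketch of that equivalence (the class and method names are mine, purely for illustration):

using System;

class AliasDemo
{
    static void Main()
    {
        // Both spellings denote the exact same type; the compiler emits
        // the special int32 encoding for either one.
        Console.WriteLine(typeof(int) == typeof(Int32)); // True

        // And because they are the same type, these two declarations
        // would clash with error CS0111 (duplicate member):
        //   static void F(int x) { }
        //   static void F(System.Int32 x) { }
    }
}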
So while a struct like
public struct MyStruct
{
    internal MyStruct m_value;
}
would indeed correspond to the following IL, and thus be invalid, because the struct would directly contain itself (the C# compiler in fact rejects it with error CS0523, "Struct member causes a cycle in the struct layout"):
.class public sequential ansi sealed beforefieldinit MyStruct extends [mscorlib]System.ValueType
{
    .field assembly valuetype MyStruct m_value
}
the actual declaration

namespace System
{
    public struct Int32
    {
        internal int m_value;
    }
}
instead compiles to (ignoring interfaces):
.class public sequential ansi sealed beforefieldinit System.Int32 extends [mscorlib]System.ValueType
{
    .field assembly int32 m_value
}
The C# compiler does not need a special case to compile System.Int32, because the CLI specification stipulates that all references to System.Int32 in signatures are replaced with the special encoding for the built-in value type int32.
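You can observe this mapping from C# itself. A small reflection sketch (note that the field name m_value is an implementation detail of the class libraries, not a public contract):

using System;
using System.Reflection;

class Int32FieldDemo
{
    static void Main()
    {
        // System.Int32 declares a single private instance field, m_value.
        FieldInfo field = typeof(int).GetField(
            "m_value", BindingFlags.Instance | BindingFlags.NonPublic);

        // Reflection reports the field's type as System.Int32, because the
        // runtime maps the built-in int32 encoding back to its
        // corresponding Base Class Library type.
        Console.WriteLine(field.FieldType); // System.Int32
    }
}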
Ergo: System.Int32 is a struct that doesn't contain another System.Int32, but an int32. In IL, you can have two method overloads, one taking a System.Int32 and the other taking an int32, and have them co-exist:
.assembly extern mscorlib
{
    .publickeytoken = (B7 7A 5C 56 19 34 E0 89)
    .ver 2:0:0:0
}
.assembly test {}
.module test.dll
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003
.corflags 0x00000001

.class MyNamespace.Program
{
    .method static void Main() cil managed
    {
        .entrypoint
        ldc.i4.5
        call int32 MyNamespace.Program::Lol(valuetype [mscorlib]System.Int32) // Call the one taking the System.Int32 type.
        call int32 MyNamespace.Program::Lol(int32)                            // Call the overload taking the built-in int32 type.
        call void [mscorlib]System.Console::Write(int32)                      // Prints 7 (5 + 1 + 1).
        call valuetype [mscorlib]System.ConsoleKeyInfo [mscorlib]System.Console::ReadKey()
        pop
        ret
    }

    .method static int32 Lol(valuetype [mscorlib]System.Int32 x) cil managed
    {
        ldarg.0
        ldc.i4.1
        add
        ret
    }

    .method static int32 Lol(int32 x) cil managed
    {
        ldarg.0
        ldc.i4.1
        add
        ret
    }
}
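(From C#, these two Lol overloads would look identical, since the compiler maps both parameter encodings back to System.Int32; a C# call site would presumably be rejected as ambiguous. Only raw IL can tell them apart.)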
Decompilers like ILSpy, dnSpy, .NET Reflector, etc. can be misleading here. At the time of writing, they decompile both int32 and System.Int32 to either the C# keyword int or the type System.Int32, because that's how integers are written in C#.
But int32 is the built-in value type for 32-bit signed integers (i.e. the VES has direct support for it, with instructions like add, sub, ldc.i4.x, etc.); System.Int32 is the corresponding value type defined in the Base Class Library.
The corresponding System.Int32 type is used for boxing, and for instance methods like ToString(), CompareTo(), etc.
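A short C# sketch of that division of labour between the built-in value and its corresponding BCL type:

using System;

class BoxingDemo
{
    static void Main()
    {
        int i = 42;

        // Boxing copies the raw int32 into a heap object whose type is
        // the class library's System.Int32.
        object boxed = i;
        Console.WriteLine(boxed.GetType()); // System.Int32

        // Instance methods like ToString() and CompareTo() are defined on
        // System.Int32 and are called on the address of the int32 value.
        Console.WriteLine(i.ToString());   // 42
        Console.WriteLine(i.CompareTo(0)); // 1
    }
}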
If you write a program in pure IL, you can absolutely make your own value type that contains an int32 in exactly the same way: you keep working with raw int32 values, but call methods of your own, custom "corresponding" value type on them.
.class MyNamespace.Program
{
    .method hidebysig static void Main(string[] args) cil managed
    {
        .entrypoint
        .maxstack 8
        ldc.i4.0
        call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
        ldc.i4.m1 // -1
        call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
        ldc.i4.3
        call void MyNamespace.Program::PrintWhetherGreaterThanZero(int32)
        ret
    }

    .method private hidebysig static void PrintWhetherGreaterThanZero(int32 'value') cil managed noinlining
    {
        .maxstack 8
        ldarga 0 // Load the *address* of the int32 argument...
        call instance bool MyCoolInt32::IsGreaterThanZero() // ...and call an instance method of MyCoolInt32 on it.
        brfalse.s printOtherMessage

        ldstr "Value is greater than zero"
        call void [mscorlib]System.Console::WriteLine(string)
        ret

    printOtherMessage:
        ldstr "Value is not greater than zero"
        call void [mscorlib]System.Console::WriteLine(string)
        ret
    }
}

.class public sealed MyCoolInt32 extends [mscorlib]System.ValueType
{
    .field assembly int32 myCoolIntsValue

    .method public hidebysig instance bool IsGreaterThanZero() cil managed
    {
        .maxstack 8
        ldarg.0  // 'this' is a managed pointer to the value...
        ldind.i4 // ...dereference it to load the underlying int32.
        ldc.i4.0
        bgt.s isGreaterThanZero

        ldc.i4.0
        ret

    isGreaterThanZero:
        ldc.i4.1
        ret
    }
}
This is no different from the System.Int32 type, except that the C# compiler doesn't consider MyCoolInt32 to be the type corresponding to int32; to the CLR, it doesn't matter. It does fail PEVerify.exe, but it runs just fine.
Decompilers will show casts and apparent pointer dereferences when decompiling the above, because they don't consider MyCoolInt32 and int32 related either.
But functionally, there's no difference, and there's no magic going on behind the scenes in the CLR.
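For comparison, a sketch of the nearest verifiable C# equivalent; the difference is that C# will only let you call IsGreaterThanZero() on a value declared as MyCoolInt32, never directly on the address of a plain int:

public struct MyCoolInt32
{
    internal int myCoolIntsValue;

    public bool IsGreaterThanZero()
    {
        // 'this' is a managed pointer to the struct; reading its single
        // int field touches the same 4 bytes the hand-written IL reads
        // with ldind.i4.
        return myCoolIntsValue > 0;
    }
}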