I would expect the following two implementations of MyMethod to behave exactly the same. Do they? If not, that assumption alone may be where I'm going wrong:
First:
public int MyMethod(int x)
{
    try
    {
        return x + 8;
    }
    catch
    {
        throw;
    }
}
Second:
public int MyMethod(int x)
{
    return x + 8;
}
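On the non-exceptional path the two certainly agree; here is a throwaway check (with hypothetical First/Second standing in for the two versions above), which only confirms that any difference would have to be on the exceptional path:

using System;

class Check
{
    static int First(int x)
    {
        try
        {
            return x + 8;
        }
        catch
        {
            throw;
        }
    }

    static int Second(int x)
    {
        return x + 8;
    }

    static void Main()
    {
        // Both variants print the same sums for every input.
        for (int x = -2; x <= 2; x++)
            Console.WriteLine("{0}: {1} vs {2}", x, First(x), Second(x));
    }
}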
So I would assume the compiler would optimize this away, i.e. remove the unnecessary try/catch block from the first implementation (at least in a Release build). As it turns out, it doesn't. Here is the generated IL for the two code samples:
First:
.method public hidebysig instance int32 MyMethod(int32 x) cil managed
{
  // Code size       11 (0xb)
  .maxstack  2
  .locals init ([0] int32 CS$1$0000)
  .try
  {
    IL_0000:  ldarg.1
    IL_0001:  ldc.i4.8
    IL_0002:  add
    IL_0003:  stloc.0
    IL_0004:  leave.s    IL_0009
  }  // end .try
  catch [mscorlib]System.Object
  {
    IL_0006:  pop
    IL_0007:  rethrow
  }  // end handler
  IL_0009:  ldloc.0
  IL_000a:  ret
}  // end of method MyClass::MyMethod
Second:
.method public hidebysig instance int32 MyMethod(int32 x) cil managed
{
  // Code size       4 (0x4)
  .maxstack  8
  IL_0000:  ldarg.1
  IL_0001:  ldc.i4.8
  IL_0002:  add
  IL_0003:  ret
}  // end of method MyClass::MyMethod
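Incidentally, one behavioural difference I can imagine (not sure it's the whole story): the rethrow may change where the exception appears to originate within this frame. Here is a quick sketch I'd use to compare stack traces; Boom is a hypothetical helper that always throws, since x + 8 itself can't throw in an unchecked context:

using System;

class Probe
{
    static int WithRethrow(int x)
    {
        try
        {
            return Boom(x);
        }
        catch
        {
            throw; // if I understand rethrow correctly, this frame's stack-trace line moves here
        }
    }

    static int WithoutRethrow(int x)
    {
        return Boom(x);
    }

    // Hypothetical helper: forces the exceptional path.
    static int Boom(int x)
    {
        throw new InvalidOperationException("probe");
    }

    static void Main()
    {
        foreach (Func<int, int> f in new Func<int, int>[] { WithRethrow, WithoutRethrow })
        {
            try
            {
                f(1);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.StackTrace);
                Console.WriteLine("---");
            }
        }
    }
}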
Could someone please shed some light on this? Is there a relevant difference in behaviour between the two implementations (side effects?)? Could the compiler optimize the code away, but simply chooses not to? Thanks!