I had a theory that casting from long to int with a value greater than int.MaxValue or less than int.MinValue would result in an exception. The issue I had at the time was identifying which type of exception was going to be thrown. I naturally assumed it would be an OverflowException or an InvalidCastException, but imagine my surprise when the following snippet didn't explode at all!
int x = 0;
long y = int.MaxValue;
y += 3;

try {
    x = (int)y;
} catch (Exception e) {
    Console.WriteLine(e.GetType());
}
This instead resulted in a value equal to int.MinValue + 2:
x: -2147483646
int.MinValue: -2147483648
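(To spell out the arithmetic, as a sketch of my own rather than anything from the snippet above: the explicit cast keeps only the low 32 bits of the long, and int.MaxValue + 3 has those bits equal to 0x80000002, which reads back as int.MinValue + 2.)

long y = (long)int.MaxValue + 3;                  // 2147483650, i.e. 0x0000000080000002
int truncated = unchecked((int)y);                // the cast keeps only the low 32 bits: 0x80000002
Console.WriteLine(truncated);                     // -2147483646 (int.MinValue + 2)
Console.WriteLine(truncated == int.MinValue + 2); // True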
So my curiosity got the better of me at this point, and I've done a fair amount of digging to learn that I can cause an exception to be thrown (thanks to this post) by utilizing the checked keyword (emphasis mine):
Overflow checking can be enabled by compiler options, environment configuration, or use of the checked keyword. The following examples ... Both examples raise an overflow exception.
Meaning, if I simply utilize the checked keyword, an OverflowException will be thrown (and it is):
try {
    x = checked((int)y);
} catch (Exception e) {
    Console.WriteLine(e.GetType());
}
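For completeness (my own sketch, not from the post): the same check can be applied to a whole block with the checked statement, or project-wide through the CheckForOverflowUnderflow build property; I'm assuming a standard .csproj here.

// checked statement form: all integral arithmetic and conversions inside are checked
try {
    checked {
        x = (int)y;                  // throws System.OverflowException
    }
} catch (OverflowException e) {
    Console.WriteLine(e.GetType()); // System.OverflowException
}

// Or for the whole project, in the .csproj:
// <PropertyGroup>
//   <CheckForOverflowUnderflow>true</CheckForOverflowUnderflow>
// </PropertyGroup>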
After reading this article on MSDN, I've learned that the default behavior is to allow the overflow to occur:
By default, these non-constant expressions are not checked for overflow at run time either, and they do not raise overflow exceptions. The previous example displays -2,147,483,639 as the sum of two positive integers.
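To illustrate that last sentence (a sketch of my own; I'm assuming the article sums int.MaxValue and 10, since that is what wraps to the quoted -2,147,483,639):

int a = int.MaxValue;
int b = 10;
int sum = a + b;             // non-constant expression: wraps silently by default
Console.WriteLine(sum);      // -2147483639

// A constant expression, by contrast, is rejected at compile time:
// int bad = int.MaxValue + 10;   // error CS0220: The operation overflows at compile time in checked mode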
TL/DR: Why is the default behavior in C# to allow overflow to occur? Doesn't this lead to unexpected behaviors?
Note: I understand that the bit flips and that's what causes it to go negative; I'm just curious as to why it's allowed to flip instead of throwing an exception by default. In a nutshell, why do we have to explicitly state we want things to explode?