This is a curiosity I've had for a little while now, and it comes in two parts.
- How, in terms of the stack, are two integers added in C#?
I understand this depends on the type of integer. For instance, if two Int16 numbers are added in C#, a type conversion takes place: both integers are converted to Int32 before the addition. This is why one must cast the sum of the two Int16 numbers back to Int16 if that is the desired result, like so:
Int16 c = (Int16)(a + b);
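For context, here is what the compiler accepts and rejects (the variable declarations are my own illustration):

    Int16 a = 1;
    Int16 b = 2;
    // Int16 c = a + b;       // error CS0266: cannot implicitly convert type 'int' to 'short'
    Int16 c = (Int16)(a + b); // compiles: the sum is an Int32 and must be narrowed explicitly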
This type conversion is supposedly done to prevent overflow from the summation, but it raises a question. Is it a requirement for both integers to be at least 4 bytes long in order for the stack to complete the addition operation? If the system is 32-bit, that would make sense to me. What throws me off is that no such conversion is performed on two Int64 numbers: they may be added without any explicit cast. Obviously, the risk of overflow is minimal for two Int64 numbers, but are both numbers pushed onto the stack in full before the addition? Generally speaking, what does the stack do to operate on two integers, and is there a minimum size a value must have in order to be pushed onto the stack?
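To make the comparison concrete, here is a sketch of the IL the C# compiler emits for 16-bit versus 64-bit addition (the method names are mine, and the exact output can be checked with ildasm or sharplab.io):

    // 16-bit operands: widened to int32 on the evaluation stack, added, then truncated.
    static Int16 AddInt16(Int16 a, Int16 b) => (Int16)(a + b);
    // IL (roughly): ldarg.0, ldarg.1, add, conv.i2, ret

    // 64-bit operands: int64 is a native evaluation-stack type, so no conversion is emitted.
    static Int64 AddInt64(Int64 a, Int64 b) => a + b;
    // IL (roughly): ldarg.0, ldarg.1, add, ret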
- Is it possible to view the operator definitions for the integer types in C#?
When making a custom struct in C#, one may define a custom operation for each of the operators (+, -, *, /, etc.). However, I've tried searching for the operator definitions for the pre-defined structs in C# and have been unable to find them. I understand that the addition and subtraction operations go down to the CPU level and are intrinsic to the arithmetic logic unit. However, operations such as multiplication and division are not always intrinsic from what I have seen, and thus are defined via algorithms at the software level. Therefore, is it possible to view these definitions for the pre-defined C# structs? If so, is it also possible to modify them? The reason is that I would like to add two Int16 numbers without any casting. I understand there is a risk of overflow, but the casting is limiting the speed of a program I am writing.
P.S. If there is any literature someone could point me to on this subject as well, I would greatly appreciate it.
I have tried creating a custom struct to define my own operators. However, I have been unable to prevent the casting when adding two Int16 numbers.
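For reference, this is the kind of wrapper I have been experimenting with (the struct name and members here are only an illustration); even with the cast hidden inside the operator, it still happens:

    public readonly struct Short16
    {
        private readonly Int16 _value;

        public Short16(Int16 value) => _value = value;

        // The cast is tucked away inside the operator, but the operands are
        // still widened to int32 before the add and truncated back afterwards.
        public static Short16 operator +(Short16 left, Short16 right)
            => new Short16((Int16)(left._value + right._value));

        public static implicit operator Int16(Short16 value) => value._value;
        public static implicit operator Short16(Int16 value) => new Short16(value);
    }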
EDIT:
It seems the reason for the cast is how the evaluation stack manages the integers. As per the explanation in the Common Language Infrastructure (ECMA-335) specification, "The CLI only operates on the numeric types int32 (4-byte signed integers), int64 (8-byte signed integers), native int (native-size integers), and F (native-size floating-point numbers)... The shortest value actually stored on the stack is a 4-byte integer...." In other words, both integers must be widened to at least 4 bytes on the evaluation stack before they can be operated on.
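A small example of the practical consequence (the values are my own): since both Int16 operands are widened to 4 bytes on the evaluation stack, the addition itself cannot overflow; any overflow only appears when the result is narrowed back to Int16:

    Int16 a = 32000;
    Int16 b = 1000;
    int   wide    = a + b;                     // 33000: the 32-bit add cannot overflow here
    Int16 wrapped = unchecked((Int16)(a + b)); // -32536: the narrowing cast wraps silently
    Int16 guarded = checked((Int16)(a + b));   // throws OverflowException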