I have a question out of curiosity.
How is Int64 implemented for 32-bit processors? I can see three options:

- 32-bit processors have some kind of extension allowing 64-bit arithmetic.
- Int64 is implemented entirely in the CLR, which builds 64-bit arithmetic out of 32-bit operations.
- Some 32-bit processors can do 64-bit arithmetic and the CLR uses that; on other processors the CLR falls back to its own implementation of 64-bit logic.
The first scenario doesn't look plausible, because it would imply that 64-bit processors should now be capable of 128-bit arithmetic. The second raises the question of how it can be done efficiently. The third seems to combine the worst features of both approaches.
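For concreteness, here is a minimal sketch of what the second option means in practice: 64-bit addition composed from two 32-bit additions plus a carry. This is only an illustration of the technique, not the CLR's actual code, and the `Emulated64` type is made up for the example.

```csharp
using System;

// Hypothetical type: a 64-bit value stored as two 32-bit halves.
struct Emulated64
{
    public uint Hi;  // high 32 bits
    public uint Lo;  // low 32 bits

    public Emulated64(uint hi, uint lo) { Hi = hi; Lo = lo; }

    // 64-bit addition built from 32-bit operations: add the low halves,
    // detect the carry, then add the high halves plus the carry.
    public static Emulated64 Add(Emulated64 a, Emulated64 b)
    {
        uint lo = unchecked(a.Lo + b.Lo);   // wraps on overflow
        uint carry = lo < a.Lo ? 1u : 0u;   // carry out of the low half
        uint hi = unchecked(a.Hi + b.Hi + carry);
        return new Emulated64(hi, lo);
    }
}

class Demo
{
    static void Main()
    {
        var a = new Emulated64(0, 0xFFFFFFFF);  // 4294967295
        var b = new Emulated64(0, 1);
        var sum = Emulated64.Add(a, b);
        Console.WriteLine($"{sum.Hi:X8}{sum.Lo:X8}");  // prints 0000000100000000
    }
}
```

Addition and subtraction cost only a pair of 32-bit operations (and x86 even has an add-with-carry instruction for exactly this), which suggests the software approach need not be slow for those; multiplication and division take more work.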