You've mentioned both C++ and ASP.NET. The two are very different.
As far as the C and C++ specifications are concerned, the only thing you're guaranteed about a primitive data type is the minimum range of values it must be able to represent. Prepare for your first surprise: `int` is only required to cover the range [-32767; 32767]. Most people today think of `int` as a 32-bit number, but it's really only guaranteed to hold roughly the equivalent of a 16-bit number. Also note that the range isn't the more typical [-32768; 32767], because C was designed as a common abstract machine for a wide range of platforms, including platforms that didn't use 2's complement for their negative numbers.
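To make this concrete, here's a minimal sketch in plain standard C++ (nothing assumed beyond the headers shown) that asserts only what the standard guarantees and then prints what your particular implementation actually gives you:

```cpp
#include <climits>
#include <cstdio>

int main() {
    // The standard only guarantees that int covers at least [-32767; 32767];
    // the real limits are implementation-defined and exposed via <climits>.
    static_assert(INT_MAX >= 32767, "guaranteed by the standard");
    static_assert(INT_MIN <= -32767, "guaranteed by the standard");

    std::printf("int:  %d..%d (%zu bytes)\n", INT_MIN, INT_MAX, sizeof(int));
    std::printf("long: %ld..%ld (%zu bytes)\n", LONG_MIN, LONG_MAX, sizeof(long));
}
```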
It shouldn't be surprising, therefore, that `long` is actually a "sort-of-32-bit" data type: it's only guaranteed to cover at least [-2147483647; 2147483647]. This doesn't mean that C++ implementations on Linux (which commonly use a 64-bit number for `long`) are wrong, but it does mean that C++ applications written for Linux that assume `long` is 64-bit are wrong. This is a lot of fun when porting C++ applications to Windows, of course.
The standard 64-bittish integer type to use is `long long`, and that is also the standard way of declaring a 64-bittish integer on Windows.
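As an illustration of how this plays out in practice (the sizes in the comments are just the common LP64 Linux and LLP64 Windows conventions, not guarantees of any kind):

```cpp
#include <cstdio>

int main() {
    // Typical results, not guarantees:
    //   64-bit Linux (LP64):    int = 4, long = 8, long long = 8
    //   64-bit Windows (LLP64): int = 4, long = 4, long long = 8
    std::printf("sizeof(int)       = %zu\n", sizeof(int));
    std::printf("sizeof(long)      = %zu\n", sizeof(long));
    std::printf("sizeof(long long) = %zu\n", sizeof(long long));
}
```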
However, .NET cares about no such things, because it is built from the ground up on its own specification, in part exactly because of how history-laden C and C++ are. In .NET, `int` is a 32-bit integer and `long` is a 64-bit integer, and `long` is always bigger than `int`. In C, if you used `long` (32-bittish) and stored a value like ten trillion in there, there was a chance it would work, since it's possible that your `long` was actually a 64-bit number, and C didn't care about the distinction - that's exactly what happens on most Linux C and C++ compilers.

Since the types are defined like this for performance reasons, it's perfectly legal for the compiler to use a 32-bit data type to store an 8-bit value (keep that in mind when you're "optimizing for performance" - the compiler is doing optimizations of its own). .NET can still run on platforms that don't have e.g. 32-bit 2's complement integers, but the runtime must ensure that the type can hold as much as a 32-bit 2's complement integer, even if that means picking the next bigger type ("wasting" twice as much memory, usually).
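If you want .NET-style width guarantees in C or C++ today, the closest equivalent (a sketch of one option, not the only one) is the exact-width typedefs from `<cstdint>`:

```cpp
#include <cstdint>
#include <limits>

int main() {
    // Exact-width types: the program won't compile on a platform that can't
    // provide them, which is the C/C++ way of getting the same
    // "int is 32 bits, long is 64 bits" promise that .NET makes.
    std::int32_t a = 2'000'000'000;      // always exactly 32 bits
    std::int64_t b = 10'000'000'000'000; // "ten trillion" always fits here

    static_assert(std::numeric_limits<std::int64_t>::max() >
                  std::numeric_limits<std::int32_t>::max(),
                  "int64_t always outranges int32_t");
    return a < b ? 0 : 1;
}
```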