
Everyone knows this: int is smaller than long.

Behind this MSDN link, I'm reading the following:

INT_MIN (Minimum value for a variable of type int.)    –2147483648
INT_MAX (Maximum value for a variable of type int.)     2147483647

LONG_MIN (Minimum value for a variable of type long.)  –2147483648
LONG_MAX (Maximum value for a variable of type long.)   2147483647

The same information can be found here.
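These limits can be reproduced with a short program that just prints the <climits> constants. A minimal sketch; the values in the comments assume an MSVC-style target where both int and long are 32 bits:

```cpp
#include <climits>
#include <cstdio>

int main() {
    // On MSVC for Windows, int and long are both 32-bit, so the pairs match.
    std::printf("INT_MIN  = %d\n",  INT_MIN);   // -2147483648
    std::printf("INT_MAX  = %d\n",  INT_MAX);   //  2147483647
    std::printf("LONG_MIN = %ld\n", LONG_MIN);  // -2147483648 on this target
    std::printf("LONG_MAX = %ld\n", LONG_MAX);  //  2147483647 on this target
    return 0;
}
```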

Have I been told a lie my whole life? What is the difference between int and long, if not the values they can hold? How come?

Gil Sand

4 Answers


You've mentioned both C++ and ASP.NET. The two are very different.

As far as the C and C++ specifications are concerned, the only thing you know for certain about a primitive data type is the minimum range of values it is guaranteed to be able to store. Prepare for your first surprise - int is only guaranteed to cover the range [-32767; 32767]. Most people today think that int is a 32-bit number, but it's really only guaranteed to be able to store the equivalent of a 16-bit number, almost. Also note that the guaranteed range isn't the more typical [-32768; 32767], because C was designed as a common abstract machine for a wide range of platforms, including platforms that didn't use 2's complement for their negative numbers.

It therefore shouldn't be surprising that long is actually a "sort-of-32-bit" data type - at least 32 bits, possibly more. This doesn't mean that C++ implementations on Linux (which commonly use a 64-bit number for long) are wrong, but it does mean that C++ applications written for Linux that assume that long is 64-bit are wrong. This is a lot of fun when porting C++ applications to Windows, of course.

The standard "64-bittish" integer type (at least 64 bits) is long long, and that is also the standard way of declaring such an integer on Windows.
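If the intent is "give me exactly 64 bits" rather than "give me at least this range", the <cstdint> aliases spell it out. A minimal sketch (C++11 or later assumed) showing both options:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // long long must be able to hold at least a 64-bit range on every
    // conforming implementation, Windows included.
    long long tenTrillion = 10000000000000LL;

    // The <cstdint> aliases ask for an exact width instead; int64_t may be
    // absent on exotic hardware, which is why int_least64_t also exists.
    std::int64_t       exact   = 10000000000000LL;
    std::int_least64_t atLeast = 10000000000000LL;

    std::printf("%lld %lld %lld\n", tenTrillion,
                static_cast<long long>(exact),
                static_cast<long long>(atLeast));
    return 0;
}
```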

However, .NET cares about no such things, because it is built from the ground up on its own specification - in part exactly because of how history-laden C and C++ are. In .NET, int is a 32-bit integer and long is a 64-bit integer, and long is always bigger than int. In C, if you used long (32-bittish) and stored a value like ten trillion in there, there was a chance it would work, since it's possible that your long was actually a 64-bit number, and C didn't care about the distinction - that's exactly what happens with most Linux C and C++ compilers.

Since the types are defined like this for performance reasons, it's perfectly legal for the compiler to use a 32-bit data type to store an 8-bit value (keep that in mind when you're "optimizing for performance" - the compiler is doing optimizations of its own). .NET can still run on platforms that don't have e.g. 32-bit 2's complement integers, but the runtime must ensure that the types can hold at least as much as a 32-bit 2's complement integer, even if that means taking the next bigger type ("wasting" twice as much memory, usually).

Luaan
  • on 32-bit Linux `long` is also 32 bits – phuclv Jun 16 '16 at 08:51
  • @LưuVĩnhPhúc It ultimately depends on the compiler - I'm not sure which compilers do it in what way, but all the Linux-borne code I've worked with assumed `long` can hold a 64-bit value (on a 64-bit OS and CPU, that is). – Luaan Jun 16 '16 at 09:19

In C and C++ the requirements are that int can hold at least 16 bits, long can hold at least 32 bits, and int can not be larger than long. There is no requirement that int be smaller than long, although compilers often implement them that way. You haven't been told a lie, but you've been told an oversimplification.
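Those requirements can be written down as compile-time checks; a minimal sketch (C++11 static_assert assumed) that any conforming implementation should accept:

```cpp
#include <climits>

// int must cover at least a 16-bit range, long at least a 32-bit range,
// and long must be able to represent every value an int can hold.
static_assert(INT_MAX  >=  32767,      "int must span at least 16 bits");
static_assert(INT_MIN  <= -32767,      "int must span at least 16 bits");
static_assert(LONG_MAX >=  2147483647, "long must span at least 32 bits");
static_assert(LONG_MIN <= -2147483647, "long must span at least 32 bits");
static_assert(LONG_MAX >= INT_MAX && LONG_MIN <= INT_MIN,
              "long's range can never be narrower than int's");

int main() { return 0; }  // nothing runs; the checks happen at compile time
```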

Pete Becker
  • Yeah, there's a thin line between "lie" and "oversimplification". Smart people lie by oversimplifying :P – Luaan Jun 16 '16 at 14:15

This is C++

On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32-bit processors, and most compilers for these platforms use a 32-bit int which has the same size and representation as a long.

See the reference: http://tsemba.org/c/inttypes.html
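A quick way to see what a particular compiler actually chose is to print the sizes; the results in the comment are typical (not guaranteed) for the platforms discussed above:

```cpp
#include <cstdio>

int main() {
    // Typical output: 4 4 8 on Windows (32- and 64-bit) and 32-bit Linux,
    // 4 8 8 on 64-bit Linux and macOS. Only the minimum ranges are guaranteed.
    std::printf("sizeof(int)       = %zu\n", sizeof(int));
    std::printf("sizeof(long)      = %zu\n", sizeof(long));
    std::printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}
```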

Suren Srapyan

No! Well, it's like being told since childhood that the Sun rises in the east and sets in the west (the Sun doesn't move, after all!).
In earlier processing environments, where we had 16-bit operating systems, an integer was considered to be 16 bits (2 bytes) and a 'long' 4 bytes (32 bits).

But with the advent of 32-bit and 64-bit operating systems, an integer is taken to be 32 bits (4 bytes) and a long to be 'at least as big as an integer', hence 32 bits again. That explains why the maximum and minimum ranges of 'int' and 'long' come out equal here.

Hence, this depends entirely on the architecture of your system.