47

Considering that the following statements return 4, what is the difference between the int and long types in C++?

sizeof(int)
sizeof(long)
Pooven
  • 1,744
  • 1
  • 25
  • 44
Alex
  • 43,191
  • 44
  • 96
  • 127
  • 2
    Asked and answered: http://stackoverflow.com/questions/271076/what-is-the-difference-between-an-int-and-a-long-in-c/271132 – Martin York May 23 '09 at 00:48
  • On 16-bit machines an int is 2 bytes, while a long remains 4 bytes. int used to be defined as the maximum integer size that the machine can handle in one step; I say "used to be" because on 64-bit Windows it remained 4 bytes. – rxantos May 02 '22 at 16:26

6 Answers

53

From this reference:

An int was originally intended to be the "natural" word size of the processor. Many modern processors can handle different word sizes with equal ease.

Also, this bit:

On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32 bit processors and most compilers for these platforms use a 32 bit int which has the same size and representation as a long.
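
As a quick sanity check, a minimal sketch along these lines prints both sizes for whatever platform you compile it on (the exact numbers depend on the compiler's data model, so the values in the comment are only typical):

#include <iostream>

int main() {
    // Implementation-defined: a typical 32-bit build prints 4 and 4,
    // while a 64-bit Linux/macOS (LP64) build usually prints 4 and 8.
    std::cout << "sizeof(int)  = " << sizeof(int)  << '\n';
    std::cout << "sizeof(long) = " << sizeof(long) << '\n';
    return 0;
}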

Assem
  • 11,574
  • 5
  • 59
  • 97
Paul Sonier
  • 38,903
  • 3
  • 77
  • 117
45

The guarantees the standard gives you go like this:

1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

So it's perfectly valid for sizeof(int) and sizeof(long) to be equal, and many platforms choose to go with this approach. You will find some platforms where int is 32 bits, long is 64 bits, and long long is 128 bits, but it seems very common for sizeof(long) to be 4.

(Note that long long is recognized in C from C99 onwards, but was normally implemented as an extension in C++ prior to C++11.)
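
If you want to check those guarantees on your own toolchain, a minimal sketch (C++11, for static_assert and long long) could look like this; it should compile on any conforming implementation regardless of the actual sizes:

// Compile-time checks mirroring the ordering guarantee above.
static_assert(sizeof(char) == 1,                 "char is defined to be 1 byte");
static_assert(sizeof(char) <= sizeof(short),     "short is at least as large as char");
static_assert(sizeof(short) <= sizeof(int),      "int is at least as large as short");
static_assert(sizeof(int) <= sizeof(long),       "long is at least as large as int");
static_assert(sizeof(long) <= sizeof(long long), "long long is at least as large as long");

int main() { return 0; }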

Jonathan Leffler
  • 730,956
  • 141
  • 904
  • 1,278
Dan Olson
  • 22,849
  • 4
  • 42
  • 56
  • 1
    This is a useful bit of info. There are processors+compilers where sizeof(char) != sizeof(short). The details are getting a bit hazy now, but I believe that one processor I've worked on couldn't address less than 16 bits. The compiler had to do a lot of work to make sizeof(char) == 1. – Craig Lewis May 23 '09 at 00:20
  • @CraigLewis: you mean "...where `sizeof(char)==sizeof(short)`", correct? – Walter Tross Sep 04 '13 at 09:33
  • I believe some of the Cray machines had `sizeof(char) == sizeof(short) && sizeof(char) == sizeof(int) && sizeof(int) == 1` , and that size was at least 32 bits. – Jonathan Leffler Jan 18 '15 at 06:20
  • Cray machines, especially Cray SV1 were interesting. They had 8bit chars. Declaring a single char would occupy the whole 64 bit word and waste some space. For an array of chars or multiple chars in a struct the compiler would automatically pack the 8bit chars together in the 64bit words. char pointers were also bigger than other pointers, they had to store the address of the 64bit word, as well as where in the word the packed char was. – AliciaBytes Apr 19 '15 at 13:08
17

You're on a 32-bit machine or a 64-bit Windows machine. On my 64-bit machine (running a Unix-derivative O/S, not Windows), sizeof(int) == 4, but sizeof(long) == 8.

They're different types — sometimes the same size as each other, sometimes not.

(In the really old days, sizeof(int) == 2 and sizeof(long) == 4 — though that might have been the days before C++ existed, come to think of it. Still, technically, it is a legitimate configuration, albeit unusual outside of the embedded space, and quite possibly unusual even in the embedded space.)
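
Even when the sizes coincide, int and long remain distinct types; a small sketch using overload resolution makes that visible:

#include <iostream>

// Overload resolution tells int and long apart even if sizeof is identical.
void which(int)  { std::cout << "int overload\n"; }
void which(long) { std::cout << "long overload\n"; }

int main() {
    which(42);   // int literal selects the int overload
    which(42L);  // long literal selects the long overload
    return 0;
}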

Jonathan Leffler
  • 730,956
  • 141
  • 904
  • 1,278
  • 9
    Note that it depends on the compiler as well as the machine. In MSVC, sizeof(long) == 4 even on 64bit Windows. – Steve Jessop May 22 '09 at 22:33
  • Of course - but I'm not using 64-bit Windows machine. But yes, my statement "You're on a 32-bit machine" is too sweeping; it could be "You're on a 32-bit machine or a Windows machine" which would be pedantically accurate. – Jonathan Leffler May 22 '09 at 23:45
  • 8
    I don't think your first statement was inaccurate so much as a fair guess. I'm worried the second one might cause people to incorrectly think that your longs are 8 bytes as a necessary consequence of your machine being 64bit, when actually it's a circumstantial property of your compiler. – Steve Jessop May 23 '09 at 01:07
  • A lot of people are on 64 bit windows. That is a very large group of people to whom you gave the incorrect size of `sizeof(long)`, which is hardly pedantic. So I am with Steve. – Cookie May 22 '15 at 07:38
  • 1
    @Cookie: In 2009, very many fewer people were using 64-bit Windows. These days, very many more people are using 64-bit Windows. – Jonathan Leffler May 22 '15 at 07:40
4

On platforms where they both have the same size, the answer is: nothing. Both represent signed 4-byte values.

However, you cannot depend on this being true. The exact sizes of long and int are not fixed by the standard; compilers are free to give the two types different sizes, which breaks that assumption.
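
What the standard does fix are minimum ranges rather than exact sizes (int must cover at least ±32767, long at least ±2147483647). A quick sketch to see what your implementation actually provides:

#include <climits>
#include <iostream>

int main() {
    // Only the minimums are guaranteed; the actual values depend on the data model.
    std::cout << "INT_MAX  = " << INT_MAX  << '\n';
    std::cout << "LONG_MAX = " << LONG_MAX << '\n';
    return 0;
}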

JaredPar
  • 733,204
  • 149
  • 1,241
  • 1,454
3

The long must be at least the same size as an int, and possibly, but not necessarily, longer.

On common 32-bit systems, both int and long are 4-bytes/32-bits, and this is valid according to the C++ spec.

On other systems, int and long may be different sizes. I used to work on a platform where int was 2 bytes and long was 4 bytes.

abelenky
  • 63,815
  • 23
  • 109
  • 159
2

A typical best practice is not to use long/int/short directly. Instead, following the specification of your compiler and OS, wrap them in a header file to ensure each alias holds exactly the number of bits you want, and then use int8/int16/int32 instead of long/int/short. For example, on 32-bit Linux you could define a header like this:

typedef char          int8;    // 8-bit signed integer on this platform
typedef short         int16;   // 16-bit signed integer
typedef int           int32;   // 32-bit signed integer
typedef unsigned int  uint32;  // 32-bit unsigned integer
fishbone
  • 3,140
  • 2
  • 37
  • 50
fwlx
  • 31
  • 4
  • 5
    There is no need to define them; they are already provided by a Standard header file called `<cstdint>`, see here: http://en.cppreference.com/w/cpp/types/integer – jogojapan May 28 '13 at 02:30
  • @jogojapan Yeah, we should always prefer the standard definitions when possible. But if the code needs to compile on some old OS/compiler that lacks them, then defining them manually ensures compatibility. Every option has its pros and cons :) – fwlx May 29 '13 at 08:59
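
On a toolchain that provides the standard header mentioned above, a minimal sketch using the fixed-width aliases from <cstdint> gives the same effect without hand-rolled typedefs:

#include <cstdint>
#include <iostream>

int main() {
    // Exact-width aliases from the standard library rather than platform-specific typedefs.
    std::int8_t   a = 0;
    std::int16_t  b = 0;
    std::int32_t  c = 0;
    std::uint32_t d = 0;
    std::cout << sizeof(a) << ' ' << sizeof(b) << ' '
              << sizeof(c) << ' ' << sizeof(d) << '\n';  // prints "1 2 4 4"
    return 0;
}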