Just noticed this on OSX and I found it curious as I expected long to be bigger than int. Is there any good reason for making them the same size?
5 Answers
This is a result of the loose nature of size definitions in the C and C++ language specifications. I believe C has specific minimum sizes, but the only rule in C++ is this:
1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
Moreover, `sizeof(int)` and `sizeof(long)` are not the same on all platforms. Every 64-bit platform I've worked with has had `long` fit the natural word size: 32 bits on a 32-bit architecture and 64 bits on a 64-bit architecture.
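A minimal sketch (assuming a C++11 compiler for static_assert, not part of the original answer) that checks the ordering rule at compile time and prints the platform's actual sizes:
#include <cstdio>

// The ordering below holds on every conforming implementation, so this
// translation unit should compile anywhere.
static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");
static_assert(sizeof(char) <= sizeof(short), "short is at least as wide as char");
static_assert(sizeof(short) <= sizeof(int), "int is at least as wide as short");
static_assert(sizeof(int) <= sizeof(long), "long is at least as wide as int");

int main()
{
    // The actual values are platform-specific.
    std::printf("char=%zu short=%zu int=%zu long=%zu\n",
                sizeof(char), sizeof(short), sizeof(int), sizeof(long));
}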

- AFAIK, 64-bit Windows is an exception: it is LLP64, so there `long` is 32 bits and `long long` is 64 bits. It depends on what you think is more important as an implementor: keeping sizeof(long) == sizeof(pointer), or rescuing code that assumes sizeof(long) == sizeof(int). MS chose the latter. – Marco van de Voort May 03 '09 at 16:06
- Marco van de Voort makes a good point. Going back in time, on 16-bit machines, `int` was typically the same size as `short`, and `long` was the new, big 32-bit size. So, with a big enough perspective, there is no fixed rule about what `int` is the same size as: there were machines with 16-bit `short`, 32-bit `int` and 64-bit `long` values, I believe, though they were not very popular because most code assumed (sizeof int == sizeof short || sizeof int == sizeof long) && sizeof short != sizeof long. – Jonathan Leffler May 03 '09 at 16:47
- `int` is essentially the most convenient and efficient integer type
- `long` is/was the largest integer type
- `short` is the smallest integer type

If the largest integer type is also the most efficient, then `int` is the same as `long`. A while ago (think pre-32-bit), sizeof(int) == sizeof(short) on a number of platforms, since 16 bits was the widest natural integer.
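A minimal sketch (mine, not the answerer's) that prints the ranges behind those sizes via <climits>, which makes the "widest natural integer" claim easy to check on any platform:
#include <climits>
#include <cstdio>

int main()
{
    // On a pre-32-bit platform you would typically see SHRT_MAX == INT_MAX;
    // on LP64 systems LONG_MAX is far larger than INT_MAX.
    std::printf("short: %zu bytes, max %d\n",  sizeof(short), SHRT_MAX);
    std::printf("int:   %zu bytes, max %d\n",  sizeof(int),   INT_MAX);
    std::printf("long:  %zu bytes, max %ld\n", sizeof(long),  LONG_MAX);
}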

- `char` is an integer type, and it's generally smaller than `short`. – Bastien Léonard May 03 '09 at 16:26
- `char` is a little strange... by itself, it has implementation-defined signedness, so it is a weird type of integer. Consider "char c = -1; int i = c; unsigned u = c;". The values of i and u are implementation-defined. I would agree that both "unsigned char" and "signed char" are integer types, though ;) – D.Shawley May 03 '09 at 17:15
- `long` is still 32-bit on some 64-bit systems, so it is not the largest integer type, just like `short` isn't the smallest. :) – jalf May 03 '09 at 20:07
- According to the standard, bool is the smallest integer type (3.9.1:7) ;-) – Steve Jessop May 03 '09 at 21:10
- @onebyone: I have rechecked the standard (specifically 3.9.1) and it does not state anywhere that sizeof(bool) < sizeof(char); they are never compared. Its requirements for size are minimal: able to store 'true' or 'false' only, but implementations can define bool to be much bigger (Apple PPC systems have a 4-byte-wide bool type). – David Rodríguez - dribeas May 04 '09 at 23:19
- Agreed, I meant smallest by range, not smallest storage. No object can have smaller storage than char, unless you count the empty base class optimisation. – Steve Jessop May 04 '09 at 23:43
- @jalf - `unsigned long` is still the largest integral type in ISO C++. Support for _ULonglong isn't added until TR1 gets approval. That is what I was hinting at with the "is/was" thing, BTW. Another note of interest is that there is no guarantee that `short`, `int`, or even `long` aren't all the same size as `char`. Of course, this would require that `char` be able to represent the full 32-bit range as required by C89's Numerical limits clause. Not that anyone would consider such a thing ;) – D.Shawley May 05 '09 at 02:05
int is supposed to be the natural word size of the architecture. In the old days, on 16-bit machines like the original IBM PC, ints were 16 bits and longs were 32 bits. On 32-bit machines like the 68000 series, ints were still "the natural word size", which was now 32 bits, and longs remained at 32 bits. Over time, longs grew to be 64 bits, and then we started using 64-bit architectures like the Intel Core 2, and so I expect int to grow to 64 bits sooner or later.
Interesting fact: On my laptop, with a Core 2 Duo and Mac OS X 10.5, int and long are both 32 bits. On my Linux box, also with a Core 2 Duo and Ubuntu, int is 32 bits and long is 64 bits.
Years ago, I was asked in a job interview where an int pointer would be after you added 3 to it. I answered "3 times sizeof(int) past where it is now". The interviewer pressed me, and I said it would depend on the architecture, since (at that time) Windows used 16-bit ints, but since I was doing Unix programming I was more used to 32-bit ints. I didn't get the job - I suspect the interviewer didn't like the fact that I knew more than he did.
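The interview answer, as a minimal sketch: pointer arithmetic advances in units of the pointee type, so p + 3 lands 3 * sizeof(int) bytes past p, whatever sizeof(int) happens to be on that architecture.
#include <cstdio>

int main()
{
    int arr[4] = {0, 1, 2, 3};
    int *p = arr;
    int *q = p + 3; // 3 elements, i.e. 3 * sizeof(int) bytes

    // 6 bytes with a 16-bit int, 12 bytes with a 32-bit int.
    std::printf("byte distance: %td\n", (char *)q - (char *)p);
}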

- I don't think int is 64 bits on a Core 2. I think it's just long that's grown. At least, that has been my experience on 64-bit Debian. – Tom May 03 '09 at 15:50
- You're right, Tom, both int and long are 4 bytes on my MacBookPro. – Paul Tomblin May 03 '09 at 15:54
- It is not the hardware but rather the compiler that chooses. The exact same hardware can have different int and long sizes depending on the OS and the compiler. GCC on MacOSX on an Intel Core 2 Duo will by default generate 32-bit int/long. With the -m64 option it will generate 32-bit int and 64-bit long (and pointer). – David Rodríguez - dribeas May 03 '09 at 16:33
- Right, dribeas. It's likely that on Paul's Ubuntu box, gcc is compiled with -m64 as the default, whereas on the OSX box -m32 is the default. – Tom May 03 '09 at 16:59
As Tom correctly pointed out, the only standard size in C++ is char, whose size is 1(*). From there on, only a 'not smaller than' relation holds between types. Most people will claim that it depends on the architecture, but it is more of a compiler/OS decision. The same hardware running MacOSX, Windows (32/64 bits) or Linux (32/64) will have different sizes for the same data types. Different compilers on the same architecture and OS can have different sizes. Even the exact same compiler on the same OS on the same hardware can produce different sizes depending on compilation flags:
$ cat test.cpp
#include <iostream>
int main()
{
    std::cout << "sizeof(int): " << sizeof(int) << std::endl;
    std::cout << "sizeof(long): " << sizeof(long) << std::endl;
}
$ g++ -o test32 test.cpp; ./test32
sizeof(int): 4
sizeof(long): 4
$ g++ -o test64 test.cpp -m64; ./test64
sizeof(int): 4
sizeof(long): 8
That is the result of using the gcc compiler on MacOSX Leopard. As you can see, the hardware and software are the same, and yet the sizes differ between two executables born out of the same code.
If your code depends on sizes, then you are better off not using the default types but rather compiler-specific types that make the size explicit, or a portable library that offers that support. As an example, with ACE: ACE_UINT64 will be an unsigned 64-bit integer type regardless of the compiler/OS/architecture. The library will detect the compiler and environment and use the appropriate data type on each platform.
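ACE_UINT64 is that library's spelling; a sketch of the same idea using only standard headers (assuming a C99/C++11 toolchain, where <cstdint> is available):
#include <cstdint>
#include <cstdio>

int main()
{
    // Exact widths regardless of what int and long happen to be on this
    // platform (the exact-width typedefs are optional in the standard, but
    // present on all mainstream platforms).
    std::int32_t a = 0;  // exactly 32 bits
    std::uint64_t b = 0; // exactly 64 bits
    std::printf("a: %zu bytes, b: %zu bytes\n", sizeof(a), sizeof(b));
}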
(*) I have rechecked the C++ standard 3.9.1: char size shall be 'large enough to store any member of the implementation's basic character set'. Later in: 5.3.3: sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1, so yes, size of a char is 1 byte.
After reading other answers, I found one that states that bool is the smallest integer type. Again, the standard is loose in its requirements and only states that bool must be able to represent true and false, saying nothing about its size. The standard is explicit to that extent: 5.3.3, footnote: "sizeof(bool) is not required to be 1".
Note that some C++ implementations have decided to use bools larger than 1 byte for other reasons. On Apple MacOSX PPC systems with gcc, sizeof(bool) == 4.

- sizeof(char) is always 1. HOWEVER, sizeof returns results in number of chars (not octets). CHAR_BIT could be e.g. 32. – Logan Capaldo May 04 '09 at 23:51
- According to 5.3.3: "The sizeof operator yields the number of bytes in the object representation of its operand." – David Rodríguez - dribeas May 05 '09 at 06:00
- Note however that 'bytes', as far as the C or C++ standards are concerned, need not be octets. This is why there's a CHAR_BIT macro to give the actual number of bits in a byte (which is guaranteed to be at least 8). – bdonlan May 05 '09 at 06:10
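To make the byte-vs-octet distinction from these comments concrete, a small sketch: multiply sizeof by CHAR_BIT to get widths in bits rather than in chars.
#include <climits>
#include <cstdio>

int main()
{
    // CHAR_BIT is 8 on mainstream platforms, but the standard only
    // guarantees it is at least 8 (some DSPs use 16 or 32).
    std::printf("CHAR_BIT = %d\n", CHAR_BIT);
    std::printf("long is %zu bits\n", sizeof(long) * CHAR_BIT);
}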
`int` and `long` are not always the same size, so do not assume that they are in code. Historically there have been 8-bit and 16-bit architectures, as well as the more familiar 32-bit and 64-bit ones. For embedded systems, smaller word sizes are still common. Search the net for ILP32 and LP64 for way too much info.
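A rough runtime classification of the common data models mentioned above (a sketch; the model names are conventional, not part of the standard):
#include <cstdio>

int main()
{
    if (sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void *) == 4)
        std::puts("ILP32 (typical 32-bit platform)");
    else if (sizeof(long) == 8 && sizeof(void *) == 8)
        std::puts("LP64 (typical 64-bit Unix)");
    else if (sizeof(long) == 4 && sizeof(void *) == 8)
        std::puts("LLP64 (64-bit Windows)");
    else
        std::puts("some other model (embedded or historical)");
}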
