Making `int` as wide as possible is not the best choice. (The choice is made by the ABI designers.)
A 64bit architecture like x86-64 can efficiently operate on `int64_t`, so it's natural for `long` to be 64 bits. (Microsoft kept `long` as 32bit in their x86-64 ABI, for various portability reasons that make sense given the existing codebases and APIs. This is basically irrelevant, because portable code that actually cares about type sizes should be using `int32_t` and `int64_t` instead of making assumptions about `int` and `long`.)
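For example, a quick check illustrates why assumptions about `long` don't travel well (results assume a mainstream x86-64 toolchain: LP64 on Linux/macOS, LLP64 on Windows; the fixed-width types are the same everywhere they exist):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* ABI-dependent: on Linux/macOS x86-64 (LP64) long is 8 bytes,
     * on Windows x64 (LLP64) it is 4 bytes. */
    printf("int:     %zu bytes\n", sizeof(int));
    printf("long:    %zu bytes\n", sizeof(long));

    /* Fixed-width types say what they mean on every ABI. */
    printf("int32_t: %zu bytes\n", sizeof(int32_t));   /* always 4 */
    printf("int64_t: %zu bytes\n", sizeof(int64_t));   /* always 8 */
    return 0;
}
```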
Having `int` be `int32_t` actually makes for better, more efficient code in many cases. An array of `int` uses only 4 bytes per element, so it has only half the cache footprint of an array of `int64_t`. Also, specific to x86-64, 32bit operand-size is the default, so 64bit instructions need an extra code byte for a REX prefix. So code density is better with 32bit (or 8bit) integers than with 16 or 64bit integers. (See the x86 wiki for links to docs / guides / learning resources.)
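A minimal sketch of the cache-footprint point (the element count is arbitrary; the per-element sizes are guaranteed for the fixed-width types):

```c
#include <stdint.h>
#include <stdio.h>

#define N 1000000   /* one million elements, arbitrary */

int main(void) {
    /* Same element count, but the int64_t array is twice as large,
     * so a linear scan touches twice as many cache lines. */
    static int32_t a32[N];
    static int64_t a64[N];
    printf("int32_t array: %zu bytes\n", sizeof a32);  /* ~4 MB */
    printf("int64_t array: %zu bytes\n", sizeof a64);  /* ~8 MB */
    return 0;
}
```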
If a program requires 64bit integer types for correct operation, it won't use `int`. (Storing a pointer in an `int` instead of an `intptr_t` is a bug, and we shouldn't make the ABI worse to accommodate broken code like that.) A programmer writing `int` probably expected a 32bit type, since most platforms work that way. (The standard of course only guarantees 16 bits.)
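A small sketch of the pointer-in-an-integer point (assuming `intptr_t` is available, which it is on mainstream platforms even though the standard makes it optional):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    int *p = &x;

    /* Buggy: on a 64bit ABI a pointer is 64 bits but int is 32 bits,
     * so the upper half of the address would be silently discarded. */
    /* int bad = (int)p; */

    /* Correct: intptr_t is defined so a pointer survives a round trip
     * through it. */
    intptr_t ok = (intptr_t)p;
    int *back = (int *)ok;
    printf("%d\n", *back);   /* prints 42 */
    return 0;
}
```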
Since there's no expectation that `int` will be 64bit in general (e.g. on 32bit platforms), and making it 64bit will make some programs slower (and almost no programs faster), `int` is 32bit in most 64bit ABIs.
Also, there needs to be a name for a 32bit integer type, for `int32_t` to be a `typedef` for.
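For example, on a typical LP64 implementation the `<stdint.h>` mapping could look roughly like this (an illustrative sketch, not any particular library's actual header):

```c
/* Illustrative sketch of how an LP64 <stdint.h> might map the
 * fixed-width names onto the standard types; real headers differ. */
typedef signed char    int8_t;
typedef short          int16_t;
typedef int            int32_t;   /* some 32bit type has to exist for this */
typedef long           int64_t;   /* LP64: long is 64bit */

typedef unsigned char  uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int   uint32_t;
typedef unsigned long  uint64_t;
```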