To my understanding, `int` was initially supposed to be a "native" integer type, with the additional guarantee that it is at least 16 bits in size, something that was considered a "reasonable" size back then.

When 32-bit platforms became more common, we can say that the "reasonable" size changed to 32 bits:
- Modern Windows uses 32-bit `int` on all platforms.
- POSIX guarantees that `int` is at least 32 bits.
- C# and Java have an `int` type that is guaranteed to be exactly 32 bits.
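These guarantees can be checked at compile time; a minimal sketch, assuming a C11 compiler for `_Static_assert`:

```c
#include <limits.h>

/* The C standard itself only guarantees a 16-bit minimum: */
_Static_assert(INT_MAX >= 32767, "required by the C standard");

/* The 32-bit size is a platform convention, not a language rule;
   this second check is an assumption about mainstream ABIs: */
_Static_assert(INT_MAX == 2147483647, "holds on modern Windows and POSIX systems");
```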
But when 64-bit platforms became the norm, no one expanded `int` to 64 bits, for two reasons:
- Portability: a lot of code depends on `int` being 32 bits in size.
- Memory consumption: doubling memory usage for every `int` would be unreasonable in most cases, since the numbers in use are usually much smaller than 2 billion (the sketch after this list makes the cost concrete).
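A minimal sketch of that doubling (the struct and element count are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

struct point32 { int32_t x, y; };  /* what int gives you today */
struct point64 { int64_t x, y; };  /* what a 64-bit int would give you */

int main(void)
{
    /* A million points: ~8 MB today versus ~16 MB with a 64-bit int. */
    printf("%zu vs %zu bytes\n",
           sizeof(struct point32) * 1000000,
           sizeof(struct point64) * 1000000);
    return 0;
}
```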
Now, why would you prefer `uint32_t` to `uint_fast32_t`? For the same reason that C# and Java always use fixed-size integers: programmers do not write code thinking about the possible sizes of different types; they write for one platform and test on that platform, so most code implicitly depends on the specific sizes of data types. This is why `uint32_t` is the better choice for most cases: it leaves no ambiguity about its behavior.
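The difference in ambiguity is easy to demonstrate; a minimal sketch, assuming an x86_64 target where Windows and Linux disagree about the width of `uint_fast32_t`:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = UINT32_MAX;
    uint32_t r32 = a + 1;        /* guaranteed to wrap to 0 everywhere */

    uint_fast32_t b = UINT32_MAX;
    uint_fast32_t rfast = b + 1; /* 0 where uint_fast32_t is 32 bits (Windows),
                                    4294967296 where it is 64 bits (Linux x86_64) */

    printf("%" PRIu32 " %" PRIuFAST32 "\n", r32, rfast);
    return 0;
}
```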
Moreover, is `uint_fast32_t` really the fastest type with a size of at least 32 bits on a given platform? Not really. Consider this code, compiled by GCC for x86_64 Windows:
```c
extern uint64_t get(void);

uint64_t sum(uint64_t value)
{
    return value + get();
}
```
Generated assembly looks like this:
```
push   %rbx
sub    $0x20,%rsp
mov    %rcx,%rbx
callq  d <sum+0xd>
add    %rbx,%rax
add    $0x20,%rsp
pop    %rbx
retq
```
Now change the return type of `get()` to `uint_fast32_t` (which is 4 bytes on x86_64 Windows). The only difference is the declaration:
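```c
extern uint_fast32_t get(void);  /* was uint64_t */

uint64_t sum(uint64_t value)
{
    return value + get();
}
```

The generated assembly becomes: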
```
push   %rbx
sub    $0x20,%rsp
mov    %rcx,%rbx
callq  d <sum+0xd>
mov    %eax,%eax        ; <-- additional instruction
add    %rbx,%rax
add    $0x20,%rsp
pop    %rbx
retq
```
Notice that the generated code is almost the same, except for the additional `mov %eax,%eax` instruction after the function call, which zero-extends the 32-bit return value to 64 bits.
There is no such issue if you use only 32-bit values, but you will probably be mixing them with `size_t` variables (array sizes, for example), and those are 64 bits on x86_64. On Linux `uint_fast32_t` is 8 bytes, so the situation is different there.
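Here is a minimal sketch of that interaction (the function is invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

uint64_t sum_array(const uint64_t *data, size_t count)
{
    uint64_t total = 0;
    /* On x86_64 Windows, i is 32 bits and must be zero-extended to 64 bits
       for both the comparison against count and the indexing; on Linux,
       uint_fast32_t is already 64 bits, so no extension is needed.
       (Also note: if count ever exceeded UINT32_MAX, the 32-bit i would
       wrap around and this loop would never terminate on Windows.) */
    for (uint_fast32_t i = 0; i < count; i++)
        total += data[i];
    return total;
}
```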
Many programmers use `int` when they need to return a small value (say, in the range [-32, 32]). This would work perfectly if `int` were the platform's native integer size, but since it is not on 64-bit platforms, a type that matches the native integer size is a better choice (unless it is frequently used together with other integers of smaller size).
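For example, a comparator (hypothetical; `ptrdiff_t` is just one readily available signed type that matches the native width on mainstream platforms):

```c
#include <stddef.h>

/* Returns -1, 0, or 1; a native-width return type means callers that
   mix the result with size_t or pointer arithmetic need no widening. */
ptrdiff_t compare(const int *a, const int *b)
{
    return (*a > *b) - (*a < *b);
}
```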
Basically, regardless of what the standard says, `uint_fast32_t` is broken on some implementations anyway. If you care about the additional instructions generated in some places, you should define your own "native" integer type. Or you can use `size_t` for this purpose, as it will usually match the native size (I am not including old and obscure platforms like the 8086 here, only platforms that can run Windows, Linux, etc.).
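A minimal sketch of such a hand-rolled type, assuming the native width equals the pointer width (true on the mainstream platforms above; the names `native_int`/`native_uint` are invented for this example):

```c
#include <stdint.h>

#if UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFu
typedef int64_t  native_int;
typedef uint64_t native_uint;
#else
typedef int32_t  native_int;
typedef uint32_t native_uint;
#endif
```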
Another sign that `int` was supposed to be a native integer type is the "integer promotion rule". Most CPUs can only perform operations on their native width, so a 32-bit CPU usually can do only 32-bit additions, subtractions, and so on (Intel CPUs are an exception here). Integer types of other sizes are supported only through load and store instructions. For example, an 8-bit value is loaded with the appropriate "load 8-bit signed" or "load 8-bit unsigned" instruction, which expands the value to 32 bits after the load. Without the integer promotion rule, C compilers would have to emit a bit more code for expressions that use types smaller than the native type. Unfortunately, this no longer holds on 64-bit architectures, as compilers now have to emit additional instructions in some cases (as shown above).
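A minimal example of the promotion rule in action:

```c
#include <stdint.h>

/* Both operands are promoted to int before the addition, so the
   compiler can use the CPU's full-width add and truncate only when
   storing the result back into 8 bits. */
uint8_t add_bytes(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + b);
}
```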