152

So the reason for the typedef'd primitive data types is to abstract away the low-level representation and make it easier to comprehend (uint64_t instead of a long long type, which is 8 bytes).

However, there is also uint_fast32_t, which is typedef'd to the same type as uint32_t. Will using the "fast" version make the program faster?
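For reference, the header I'm looking at defines them along these lines (a paraphrased sketch of this platform's header, not something the standard requires):

/* Sketch of one implementation's <stdint.h>; the standard does not
   require the fast types to match the exact-width ones. */
typedef unsigned int uint32_t;
typedef unsigned int uint_fast32_t;   /* same underlying type here */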

Emil Laine
Amumu
  • long long is maybe not 8 bytes; it is possible to have a long long of 1 byte (in case CHAR_BIT is at least 64) or of 3738383 bytes. Also, uint64_t can be 1, 2, 4 or 8 bytes; for that, CHAR_BIT must be 64, 32, 16 or 8 respectively. – 12431234123412341234123 Dec 15 '16 at 09:58

4 Answers

188
  • int may be as small as 16 bits on some platforms. It may not be sufficient for your application.
  • uint32_t is not guaranteed to exist. It's an optional typedef that the implementation must provide iff it has an unsigned integer type of exactly 32 bits. Some platforms have 9-bit bytes, for example, so they don't have a uint32_t.
  • uint_fast32_t states your intent clearly: it's a type of at least 32 bits which is the best from a performance point of view. uint_fast32_t may in fact be 64 bits long; it's up to the implementation.
  • There's also uint_least32_t in the mix. It designates the smallest type that's at least 32 bits long, thus it can be smaller than uint_fast32_t. It's an alternative to uint32_t if the latter isn't supported by the platform. (A sketch combining the least and fast variants follows this list.)
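Here is a minimal sketch of how the least/fast variants can be combined; the widths involved are implementation-defined, and the comments describe intent rather than a guaranteed speedup:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* uint_least32_t keeps the array as small as the platform allows;
   uint_fast32_t lets the accumulator use whatever width the
   platform handles best. */
static uint_fast32_t sum_values(const uint_least32_t *v, size_t n)
{
    uint_fast32_t sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += v[i];
    return sum;
}

int main(void)
{
    uint_least32_t data[] = { 1, 2, 3, 4 };
    printf("%ju\n", (uintmax_t)sum_values(data, 4));
    return 0;
}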

... there is uint_fast32_t which has the same typedef as uint32_t ...

What you are looking at is not the standard. It's a particular implementation (BlackBerry). So you can't deduce from there that uint_fast32_t is always the same as uint32_t.


Yakov Galka
  • Good answer. For completeness, one could maybe point out the difference from `uint_least32_t` too, which is the same as `uint_fast32_t` except that it favours smaller storage over speed. – Damon Dec 14 '11 at 11:14
  • Why would the fastest integer that is at least 32 bits wide be larger than 32 bits? I always thought that with fewer bits there are fewer bits for the CPU to work on, and thus it's faster. What am I missing here? – Shane Hsu Sep 03 '13 at 11:24
  • @ShaneHsu: say a 64-bit CPU has a 64-bit adder, which sums 64-bit numbers in one cycle. It doesn't matter that all you want is to work on 32-bit numbers; it's not going to be faster than one cycle. Now, although it is not so on x86/amd64, 32-bit integers may not even be individually addressable. In such a case, working on them requires additional ops to extract the 32 bits from, say, 64-bit aligned units. See also the linked question. The C++ standard is written so that it can work on a machine that has 37-bit words... so there may be no 32-bit type there at all. – Yakov Galka Sep 03 '13 at 11:47
  • int_least32_t and int_fast32_t would allow use on a 37-bit machine while assuring at least 32-bit capacity, and they avoid the ambiguity of the traditional int, which is only defined as equal to or greater than short and equal to or less than long (short and long have similar relative definitions). least or fast implies memory or processing optimization. It is also an option to use a pre-processor `#define` to set the intN_t to the exact N optimized for each particular machine, compiler, and function combination, though this is a bit manual and old-fashioned for most general-purpose programs. – Max Power Jun 14 '22 at 04:17
55

The difference lies in their exactness and availability.

The documentation says:

unsigned integer type with width of exactly 8, 16, 32 and 64 bits respectively (provided only if the implementation directly supports the type):

uint8_t
uint16_t
uint32_t
uint64_t

And

fastest unsigned integer type with width of at least 8, 16, 32 and 64 bits respectively

uint_fast8_t
uint_fast16_t
uint_fast32_t
uint_fast64_t    

So the difference is pretty clear: uint32_t is a type which has exactly 32 bits, and an implementation should provide it only if it has a type with exactly 32 bits; it can then typedef that type as uint32_t. This means uint32_t may or may not be available.

On the other hand, uint_fast32_t is a type which has at least 32 bits, which also means an implementation may typedef uint32_t as uint_fast32_t if it provides uint32_t. If it doesn't provide uint32_t, then uint_fast32_t could be a typedef of any type which has at least 32 bits.
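A minimal sketch of how that availability difference plays out in code; UINT32_MAX is defined if and only if uint32_t exists, so it can be used as a probe:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The exact-width type is optional, so guard for it. */
#ifdef UINT32_MAX
    printf("uint32_t exists: %zu bytes\n", sizeof(uint32_t));
#else
    puts("no exact 32-bit unsigned type on this platform");
#endif
    /* The fast type is always provided; no guard is needed. */
    printf("uint_fast32_t:   %zu bytes\n", sizeof(uint_fast32_t));
    return 0;
}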

Jonathan Leffler
Nawaz
  • But what is the reason that makes, for example, uint_fast32_t faster than uint32_t? Why is it faster? – Destructor Jul 06 '15 at 17:05
  • @PravasiMeet: Not all integers are accessed in the same way. Some are easier to access than others. Easier means less computation and more direct access, which results in faster access. Now `uint32_t` is exactly 32 bits on all systems (if it exists), which might not be the faster choice compared to one which has, say, 64 bits. `uint_fast32_t`, on the other hand, is **at least** 32 bits and could even be 64 bits. – Nawaz Jul 06 '15 at 17:31
  • @Destructor: On some processors, if a variable gets stored in a register which is longer, the compiler may have to add extra code to lop off any extra bits. For example, if `uint16_t x;` gets stored in a 32-bit register on the ARM7-TDMI, the code `x++;` may need to be evaluated as `x=((x+1)<<16)>>16;`. On compilers for that platform, `uint_fast16_t` would most likely be defined as synonymous with `uint32_t` to avoid that. – supercat Mar 07 '16 at 20:51
  • Why are `[u]int_(fast|least)N_t` not also optional? Surely not all architectures are required by the Standard to support primitive types of at least 64 bits? Yet the wording for `stdint.h` implies that they must. It seems weird to me that we have been enforcing that since 1999, some years before 64-bit computing became mainstream - to say nothing of the lag behind that (in many cases still current) of embedded architectures. This seems like a big oversight to me. – underscore_d Oct 29 '17 at 20:27
  • @underscore_d: An implementation cannot conform to C99 or C11 unless it supports an unsigned 64-bit type with a fully-binary representation. This is a bit ironic given the Standard's willingness to bend over backward to allow ones'-complement and sign-magnitude formats, since any platform that can support multi-precision arithmetic can also support two's-complement math. I know of one compiler for a ones'-complement platform, and it had a signed long long (which I would guess might not have been stored in straight binary) but no unsigned one. – supercat Apr 27 '18 at 22:19
  • @supercat Yeah, I found that out shortly afterwards. It is a weird requirement given very esoteric allowances elsewhere. But I have heard rumblings that we might be on the way to requiring 2's complement (at least superficially; I guess platforms could emulate it if needed)... which would at least make this less strange, if not necessarily any more useful for anyone on such a platform. – underscore_d Apr 29 '18 at 09:51
  • @underscore_d: IMHO, the Standard should focus more on specifying the meaning of code *that is accepted* than on mandating things that all implementations must accept. That would allow the "One Program Rule" to be replaced with something much more useful--while no implementation would be required to process any particular program, one could have a category of programs much larger than "Strictly Conforming" which would be guaranteed to run correctly on all implementations that don't reject them. – supercat Apr 29 '18 at 15:37
  • @underscore_d: There's no particular reason, for example, that the Standard couldn't be applicable to a PIC12 implementation with 16 bytes of data RAM and space for 256 instructions. Such an implementation would need to reject a lot of programs, but that shouldn't prevent it from behaving in defined fashion for programs whose needs it could satisfy. – supercat Apr 29 '18 at 15:39
5

When you #include <inttypes.h> in your program, you get access to a bunch of different ways of representing integers.

The uint_fast*_t types simply name the fastest type for representing at least a given number of bits.

Think about it this way: you define a variable of type short and use it several times in the program, which is totally valid. However, the system you're working on might operate more quickly on values of type int. By defining a variable as a uint_fast*_t type, you let the implementation choose the most efficient representation it can work with.

If there is no difference between these representations, then the system chooses whichever one it wants, and uses it consistently throughout.
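Since the actual width behind uint_fast*_t varies between systems, inttypes.h also supplies matching printf format macros; a minimal sketch:

#include <inttypes.h>   /* format macros; also includes <stdint.h> */
#include <stdio.h>

int main(void)
{
    uint_fast32_t x = 123456;
    /* PRIuFAST32 expands to the right conversion specifier for
       whatever type this platform chose for uint_fast32_t. */
    printf("x = %" PRIuFAST32 "\n", x);
    return 0;
}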

Harley Sugarman
  • Why inttypes.h and not stdint.h? It seems that inttypes.h only contains various mildly useful fluff, plus an include of stdint.h? – Lundin Dec 14 '11 at 08:57
  • @underscore_d I know the difference. But who uses stdio.h in professional programs, no matter the area of application? – Lundin Jul 14 '16 at 16:21
  • @Lundin I have no idea who they are, or whether they exist! I just thought it might be useful to provide a link elaborating on what that "mildly useful fluff" is ;-) Perhaps it'll help people to realise you're right and they _don't_ need it. – underscore_d Jul 14 '16 at 16:28
4

Note that the fast version could be larger than 32 bits. While the fast int will fit nicely in a register and be aligned and the like, it will use more memory. If you have large arrays of these, your program will be slower due to more cache misses and higher memory bandwidth use.

I don't think modern CPUs will benefit from int_fast32_t, since generally the sign extension from 32 to 64 bits can happen during the load instruction, and the idea that there is a 'native' integer format that is faster is old-fashioned.
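To illustrate the memory-footprint point, here is a small sketch; the sizes printed depend entirely on the platform (on x86-64 glibc, for example, uint_fast32_t is 8 bytes):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000 };
    /* Where uint_fast32_t is 64-bit, the second array occupies
       twice the memory of the first, which costs cache capacity
       and bandwidth for large data sets. */
    printf("uint32_t[N]:      %zu bytes\n", sizeof(uint32_t[N]));
    printf("uint_fast32_t[N]: %zu bytes\n", sizeof(uint_fast32_t[N]));
    return 0;
}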

Gil Colgate