19

In C++11 we are provided with fixed-width integer types, such as std::int32_t and std::int64_t, which are optional and therefore not ideal for writing cross-platform code. However, we also got mandatory variants of the types: the "fast" variants, e.g. std::int_fast32_t and std::int_fast64_t, as well as the "smallest-size" variants, e.g. std::int_least32_t, both of which are at least the specified number of bits in size.

The code I am working on is part of a C++11-based cross-platform library, which supports compilation on the most popular Unix/Windows/Mac compilers. A question that now came up is whether there is an advantage in replacing the existing integer types in the code with the C++11 fixed-width integer types.

A disadvantage of using variables like std::int16_t and std::int32_t is the lack of a guarantee that they are available, since they are only provided if the implementation directly supports the type (according to http://en.cppreference.com/w/cpp/types/integer).

However, since int is at least 16 bits wide and 16 bits are large enough for the integers used in the code, what about using std::int_fast16_t instead of int? Is there a benefit to replacing all int types with std::int_fast16_t and all unsigned ints with std::uint_fast16_t, or is this unnecessary?

Analogously, if all supported platforms and compilers are known to feature an int of at least 32 bits, does it make sense to replace int with std::int_fast32_t and unsigned int with std::uint_fast32_t?

Ident
  • 1,184
  • 11
  • 25
  • 4
    Your "disadvantage" seems to be based on an assumption. Who says that `std::int16_t` and `std::int32_t` might go away? After all, they're required by the *standard*. – Greg Hewgill Mar 22 '16 at 17:36
  • 3
    If you want to be sure that your int is at least 32bit wide, do it. Normal `int` does not give such guarantee. – Revolver_Ocelot Mar 22 '16 at 17:39
  • 3
    @GregHewgill they are optional. I meant they might not be supported in future versions of an OS & compiler, if they were supported there before. I am not aware of any OS & compiler that does not provide std::int16_t and std::int32_t, but since they are only non-optional if an integer of this size already exists natively, their existence cannot be guaranteed. See also: http://stackoverflow.com/questions/32155759/state-of-support-for-the-optional-fixed-width-integer-types-introduced-in-c11 where you may answer if you know more :) – Ident Mar 22 '16 at 17:39
  • 1
    It's hard to predict the future. Potentially, everything can 'go away' in a new standard. Practically it is unlikely, though :) However, the benefit of `fast` types is sound - they are mandatory in the **current** version. Can the next standard make them optional? Yes. But! They are not guaranteed to be exactly that many bits. They might be **wider**. – SergeyA Mar 22 '16 at 17:40
  • The advantage of `int_fast32_t` over plain `int` lies largely in ports to embedded platforms where integers are only 16-bits from code-bases developed and tested in 32-bit environments where a direct translation would be likely to yield overflow bugs. Of course a whole-sale translation of a complete codebase is likely to generate bugs in and of itself, so I probably wouldn't bother without an actual target in view. – doynax Mar 22 '16 at 17:41
  • @Revolver_Ocelot Indeed int is not guaranteed to be 32bit wide and that was not my assumption, I will clarify this in my question. – Ident Mar 22 '16 at 17:41
  • I forgot that int can also be 16-bit, since our supported platforms and compilers all have 32-bit ints minimum. This made my question be understood the wrong way. I fixed this and am now asking about both the 16-bit and 32-bit "ints" (depending on the situation) – Ident Mar 22 '16 at 17:53
  • We really don't know what `std::int_fast16_t` is. Is it supposed to be 32 bits if calculations that size is slightly faster, or 16 bits to save cache space? And will your program still work if it happens to be 24 bits, or 36 bits? Probably not. – Bo Persson Mar 22 '16 at 18:35
  • 2
    @GregHewgill - they're "optional" in the sense that they need not be present if the target platform doesn't have a reasonable type of that size. That's unusual these days, but there's no inherent reason for integer types to be powers of 2; some processors in the olden days had multiples of 9. On a platform like that, `int16_t` might not exist, but `int_fast16_t` and `int_least16_t` will. – Pete Becker Mar 22 '16 at 18:41
  • 1
    _"the most popular Unix/Windows/Mac compilers"_ well then you don't need to worry about portability to platforms without `int32_t` and `int16_t`. A future version of an OS/compiler won't stop supporting them, because in practice their existence depends on the hardware, not the OS or compiler. You only need to care that they might not exist on exotic hardware. If you're not compiling for 24-bit DSPs or weird 31-bit hardware then you're wasting your time worrying about it. – Jonathan Wakely Mar 22 '16 at 19:27
  • @BoPersson if you write portable code in the first place, then probably yes ! – M.M Mar 22 '16 at 20:07
  • Another case to consider is when the platform word size is 64-bit. Using `int_fast32_t` instead of `int32_t` may help the compiler to avoid emitting useless instructions – M.M Mar 22 '16 at 20:08

2 Answers

26

int can be 16, 32 or even 64 bit on current computers and compilers. In the future, it could be bigger (say, 128 bits).

If your code is ok with that, go with it.

If your code is only tested and working with 32 bit ints, then consider using int32_t. Then the code will fail to compile on a system that doesn't have 32 bit ints (which is extremely rare today), instead of failing at run time.

int_fast32_t is for when you need at least 32 bits but care a lot about performance. On hardware where a 32 bit integer is loaded as a 64 bit integer, then bitshifted back down to a 32 bit integer in a cumbersome process, int_fast32_t may be a 64 bit integer. The cost is that on such obscure platforms, your code behaves very differently.

If you are not testing on such platforms, I would advise against it.

Having things break at build time is usually better than having breaks at run time. If and when your code is actually run on some obscure processor needing these features, then fix it. The rule of "you probably won't need it" applies.

Be conservative, generate early errors on hardware you are not tested on, and when you need to port to said hardware do the work and testing required to be reliable.

In short:

Use int_fast##_t if and only if you have tested your code (and will continue to test it) on platforms where the int size varies, and you have shown that the performance improvement is worth that future maintenance.

Using int##_t with common ## sizes means that your code will fail to compile on platforms that you have not tested it on. This is good; untested code is not reliable, and unreliable code is usually worse than useless.

Without using int32_t, and using int instead, your code will sometimes have ints that are 32 bits, sometimes ints that are 64 bits (and in theory more), and sometimes ints that are 16 bits. If you are willing to test and support every such case, go for it.

Note that arrays of int_fast##_t can have cache problems: they could be unreasonably big. As an example, int_fast16_t could be 64 bits. An array of a few thousand or million of them could be individually fast to work with, but the cache misses caused by their bulk could make them slower overall; and the risk that things get swapped out to slower storage grows.

int_least##_t can be faster in those cases.

The same applies, doubly so, to network-transmitted and file-stored data, on top of the obvious issue that network/file data usually has to follow formats that are stable over compiler/hardware changes. This, however, is a different question.

However, when using fixed width integer types you must pay special attention to the fact that int, long, etc. still have the same widths as before. Integer promotion still happens based on the size of int, which depends on the compiler you are using. An integer literal in your code will be of type int, with the associated width. This can lead to unwanted behaviour if you compile your code with a different compiler. For more detailed info: https://stackoverflow.com/a/13424208/3144964

Yakk - Adam Nevraumont
  • 262,606
  • 27
  • 330
  • 524
  • First, thanks for the quick and helpful answer! Unfortunately you wrote this answer so fast I was still fixing a mistake I made in my question (forgot to mention the assumption that the ints in the supported systems are at least 32 bit) and now the question is expanded a bit. Sorry for that. – Ident Mar 22 '16 at 17:59
  • @ident I added a paragraph or two. – Yakk - Adam Nevraumont Mar 22 '16 at 18:47
  • 1
    You should point out the use cases for `int_leastxx*` and `int_fast`. Fastest might not be the best if you have to store such integers, nothing prevents sizeof(int_fast32_t) from being unreasonably big, which due to cache might actually made it slower than `int_least32_t`. – sbabbi Mar 23 '16 at 13:30
  • Thanks for updating your answer, still what about this http://stackoverflow.com/a/13424208/3144964 – Ident Mar 23 '16 at 19:43
1

I have just realised that the OP is asking about int_fast##_t, not int##_t, since the latter is optional. However, I will keep the answer, hoping it may help someone.


I would add something. Fixed size integers are so important (or even a must) when building APIs for other languages. One example is when you want to P/Invoke functions in a native C++ DLL and pass data to them from .NET managed code. In .NET, int is guaranteed to be a fixed size (32 bits). So if you used int in C++ and it was compiled as 64 bit rather than 32 bit, this could cause problems and break the layout of wrapped structs.

Humam Helfawi
  • 19,566
  • 15
  • 85
  • 160