
Implementations of the C++ standard typedef the (u)int_fastX types as one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.
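
For example, here is a simplified sketch of the choices a 64-bit glibc makes (other implementations differ):

```cpp
// Simplified excerpt in the spirit of glibc's <stdint.h> on x86-64;
// real headers spell this differently.
typedef unsigned char      uint_fast8_t;
typedef unsigned long int  uint_fast16_t;  // register-sized, not 16 bits
typedef unsigned long int  uint_fast32_t;
typedef unsigned long int  uint_fast64_t;
```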

Wouldn't it increase performance to resolve such types at compile time, choosing the optimal type for each actual use? The compiler would analyze how a _fast variable is used and then choose the optimal type. Factors coming into play could be alignment and the kinds of operations performed on the variable.

This would effectively make those types a language feature.

This could introduce bugs when the compiler suddenly decides to choose another width for such a variable. But one shouldn't use a _fast type in cases where the behaviour depends on the width, anyway.
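
To illustrate (a contrived sketch), unsigned wrap-around is one behaviour that depends on the width:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // If the compiler resolved uint_fast16_t to 16 bits, x wraps to 0;
    // if it chose 32 or 64 bits, x becomes 65536. Code like this depends
    // on the width and therefore shouldn't use a _fast type.
    std::uint_fast16_t x = 65535;
    x += 1;
    std::printf("%ju\n", static_cast<std::uintmax_t>(x));
}
```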

Is such compile-time resolution permitted by the standard? If yes, why isn't it implemented as of today? If no, why isn't it in the standard?

haslersn
  • Uh... These types are typedefs, not dynamic-width types. They *are* resolved at compile time (at standard-library-writing time, in fact) – Quentin Jan 16 '17 at 10:30
  • @Quentin I think he is suggesting that the compiler could choose per compilation, instead of having it defined in a library header that never changes – M.M Jan 16 '17 at 10:43
  • @M.M OH. Yes, the question makes perfect sense now... May my comment help other sleepy people :) – Quentin Jan 16 '17 at 10:46
  • How would you propose the compiler gains new information about the target CPU between compilations? – eerorika Jan 16 '17 at 10:48
  • Upvoted as it seems like a reasonable thing to ask, although I'm sure the answer will be that there are insurmountable difficulties with both the ABI and with the actual decision algorithm the compiler would use to choose – M.M Jan 16 '17 at 10:48
  • I will take the risk and ask for proof of the statement "*there cannot be one fastest type for every case*". What are the scenarios in which the same actual width is optimal for one case but not for the other? Or do you propose a virtual CPU which deliberately acts in such a way? And maybe more importantly, what is the basis that the *fast* types are actually the fastest... cppreference uses such a statement, but skimming through the [c++14 draft](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.pdf) I haven't found anything except that they are guaranteed to be defined. – luk32 Jan 16 '17 at 13:48
  • @luk32 First of all, there are different C++ implementations for x86 with different typedefs for, for instance, uint_fast8_t; these obviously can't all be the fastest (a quick probe is sketched below). When a value is passed around a lot, using a smaller type might be faster, while when doing arithmetic operations, using the register-sized type should be faster. Also, see http://stackoverflow.com/questions/4116297/x86-64-why-is-uint-least16-t-faster-then-uint-fast16-t-for-multiplication – haslersn Jan 16 '17 at 14:19
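
(To illustrate that last comment, a minimal probe; typical results are 8 bytes for uint_fast16_t with glibc on x86-64 but 4 bytes with MSVC, so they can't both be the fastest choice.)

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Prints the widths this implementation chose for the "fast" types;
    // the numbers differ between standard libraries on the same CPU.
    std::printf("uint_fast8_t:  %zu\n", sizeof(std::uint_fast8_t));
    std::printf("uint_fast16_t: %zu\n", sizeof(std::uint_fast16_t));
    std::printf("uint_fast32_t: %zu\n", sizeof(std::uint_fast32_t));
}
```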

1 Answer


No, this is not permitted by the standard. Keep in mind that the C++ standard defers to C for this particular area (for example, C++11 defers to C99, as per C++11 1.1/2). Specifically, C++11 18.4.1 Header <cstdint> synopsis /2 states:

The header defines all functions, types, and macros the same as 7.18 in the C standard.

So let's get your first contention out of the way. You state:

Implementations of the C++ standard typedef the (u)int_fastX types as one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.

The C standard has this to say, in C99 7.18.1.3 Fastest minimum-width integer types (my italics):

Each of the following types designates an integer type that is *usually* fastest to operate with among all integer types that have at least the specified width.

The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.

So you're indeed correct that a single type cannot be fastest for all possible uses, but this does not seem to be what the authors had in mind when defining these types.

The introduction of the fixed-width types was (in my opinion) meant to solve the problem developers had with int widths that differed across implementations.

Similarly, once a developer knows the range of values they want, the fast minimum-width types give them a way to do arithmetic on those values at the maximum possible speed.
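
For example (an illustrative sketch; the function is hypothetical):

```cpp
#include <cstddef>
#include <cstdint>

// The caller knows the count fits in 32 bits; uint_fast32_t lets the
// implementation use whichever integer of at least that width is
// cheapest to do arithmetic with on the target.
std::uint_fast32_t count_zero_bytes(const unsigned char *p, std::size_t n) {
    std::uint_fast32_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        count += (p[i] == 0);
    return count;
}
```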

Covering your three specific questions in your final paragraph (in bold below):


**(1) Is such compile-time resolution permitted by the standard?**

I don't believe so. The relevant part of the C standard has this little piece of text:

For each type described herein that the implementation provides, <stdint.h> shall declare that typedef name and define the associated macros.

That seems to indicate that it must be a typedef provided by the implementation and, since there are no "variable" typedefs, it has to be fixed.

There may be wiggle room, because it could be possible to provide a different typedef depending on certain environmental considerations, but the difficulty of actually implementing this seems very high (see my answer to your third question below).

Chief amongst these difficulties is that these adaptable types, should they have external linkage, would require agreement amongst all the compiled translation units when linked together. Having one unit with a 16-bit type and another with a 32-bit type is going to cause all sorts of problems.
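
A sketch of the failure mode (hypothetical files and function, and a compiler imagined to choose the width per translation unit):

```cpp
// common.h: an interface built on a "fast" type.
#include <cstdint>
std::uint_fast16_t scale(std::uint_fast16_t v);

// lib.cpp: suppose the compiler resolved uint_fast16_t to 32 bits here.
std::uint_fast16_t scale(std::uint_fast16_t v) { return v * 2u; }

// app.cpp: ...and to 16 bits here, because the value is mostly copied
// around. The two object files now disagree about scale's signature:
// under the Itanium C++ ABI the mangled symbols differ (_Z5scalej vs
// _Z5scalet), so the link fails; a C program would link anyway and the
// caller and callee would silently disagree on the argument's width.
```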


**(2) If yes, why isn't it implemented as of today?**

I'm pushing "no" as the answer to your first question, so I'm not going to speculate on this other than by referring you to the answer to the third question below (it's probably not implemented because it's very hard to do, with dubious benefits).


**(3) If no, why isn't it in the standard?**

A standard is a contract between the implementor and the user, describing what the implementor will provide. It's usual for standards committees to be more populated by the former (who aren't that keen on making too much extra work for themselves) than the latter.

For example, I would love to have all the you-beaut C++ data structures in C but this would have the consequence that standards versions would be decades apart rather than years :-)

paxdiablo
  • One of the "environmental considerations" is that the datatype in question never appears in the type of an object with external linkage. If it does, it must conform to the platform ABI to allow interop, and then all uses (even those without external linkage) need to be consistent. – rici Feb 02 '17 at 03:43