
As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.

Isn't that exactly what the C99 definition of int_fast16_t is too?

Maybe they put it in there just for consistency, since the other int_fastXX_t are needed?

Update

To summarize discussion below:

  • My question was wrong in many ways. The C standard does not specify a bit width for int. It specifies a range, [-32767, 32767], that int must be able to represent.
  • I realize at first most people would say, "but that range implies at least 16 bits!" But C doesn't require two's-complement storage of integers. If the standard had said "16-bit", a platform with, say, 1 parity bit, 1 sign bit, and 14 magnitude bits could still claim to be "meeting the standard" while not satisfying that range.
  • The standard does not say anything about int being the most efficient type. Aside from the range requirement above, int can be chosen by the compiler developer based on whatever criteria they deem most important (speed, size, backward compatibility, etc.).
  • On the other hand, int_fast16_t is like providing a hint to the compiler that it should use a type that is optimal for performance, possibly at the expense of any other tradeoff.
  • Likewise, int_least16_t tells the compiler to use the smallest type that is >= 16 bits, even if it would be slower. Good for conserving space in large arrays and the like.

Example: MSVC on x86-64 has a 32-bit int, even on 64-bit systems. MS chose to do this because too many people assumed int would always be exactly 32-bits, and so a lot of ABIs would break. However, it's possible that int_fast32_t would be a 64-bit number if 64-bit values were faster on x86-64. (Which I don't think is actually the case, but it just demonstrates the point)
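
For a concrete illustration (not part of the original question), here is a minimal C99 sketch; the sizes it reports are entirely implementation-defined and will differ between compilers and targets:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  /* All three sizes are implementation-defined. int_fast16_t is wider
     than 2 bytes on many 32-bit and 64-bit targets, while int_least16_t
     is typically the smallest type that covers the 16-bit range. */
  printf("int:           %zu bytes\n", sizeof(int));
  printf("int_fast16_t:  %zu bytes\n", sizeof(int_fast16_t));
  printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
  return 0;
}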

something_clever
  • Isn't this a C specific question? Why the c++ tag? – Olivier Poulin Jun 19 '15 at 15:44
  • C++ inherited type "int" from ANSI-C anyway, and the new C++11 standard inherits all the C99 typedefs. I think it's reasonable that this question applies equally to C++. – something_clever Jun 19 '15 at 15:46
  • @ask_me_about_loom: But you already know why C++ includes them: it includes them because they're part of a whole swath of C that C++ includes wholesale. So your actual question is really only about C. (+1, by the way. This is an interesting question!) – ruakh Jun 19 '15 at 22:46
  • There's no guarantee of `int` whatsoever except that it can hold at least 15 bits. On all 8 bit computers, `int` is not the most efficient type possible. Nor on 64 bits systems. – Lundin Feb 07 '18 at 16:32
  • @Lundin I suggest you read the rest of this thread, because your statement is less accurate than other answers given below. Specifically, the C standard doesn't say anything about 'int' being >= 15-bits. – something_clever Feb 08 '18 at 17:15
  • @ask_me_about_loom The C standard explicitly states that int must be able to hold the values -32767 to 32767. Have you found a way to do this in less than 15 bits + sign bit? If so, it would be a computer science revolution. – Lundin Feb 09 '18 at 08:04
  • @Lundin You are arguing the exact same thing I originally did until I was also shown the nuance in this by Kevin in one of the other threads. You're right that I could never make a type that holds that range with less than 15 bits + sign, BUT I could make a type that is 16-bits but doesn't hold that range. (For example - a parity bit). This is why the standard specifies the ranges instead of saying "16-bit". – something_clever Feb 10 '18 at 15:11
  • @ask_me_about_loom The point here is that `int` can't hold the required number range and still be the most efficient type to use for a 8 bit MCU. – Lundin Feb 12 '18 at 07:36
  • @Lundin - Yep - said that a long time ago in my edit to my original comment ^ – something_clever Feb 12 '18 at 20:33

7 Answers

35

int is a "most efficient type" in speed/size - but that is not specified by per the C spec. It must be 16 or more bits.

int_fast16_t is the most efficient type in terms of speed with at least the range of a 16-bit int.

Example: A given platform may have decided that int should be 32-bit for many reasons, not only speed. The same system may find a different type is fastest for 16-bit integers.

Example: On a 64-bit machine, where one would expect int to be 64-bit, a compiler may instead compile with a 32-bit int for compatibility. In this mode, int_fast16_t could be 64-bit, as that is natively the fastest width: it avoids alignment issues, etc.
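
For illustration (not from the original answer), here is a minimal sketch that reports whether the implementation at hand made int_fast16_t wider than int; on glibc/x86-64 it typically is, on MSVC it typically is not, and both choices conform to the standard:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  /* Either outcome is allowed; the width of int_fast16_t is entirely the
     implementation's choice. */
  if (sizeof(int_fast16_t) > sizeof(int))
    puts("int_fast16_t is wider than int on this implementation");
  else
    puts("int_fast16_t is not wider than int on this implementation");
  return 0;
}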

chux - Reinstate Monica
  • This seems to make the most sense... So you're saying that "int" may not be selected for speed, but some kind of space/time tradeoff that the compiler developer makes for the platform, whereas int_fast16_t is like you saying, "I don't care about space, I want it fast"? Seems like the right answer... – something_clever Jun 19 '15 at 16:00
  • @ask_me_about_loom Another example: In C, `int/unsigned` have a special property over other narrower types due to "integer promotions". This feature has many impacts aside from speed/size, including undefined behavior for narrower unsigned types experiencing overflow. So making `int` the fastest has many code repercussions. So much code _assumes_ 32-bit `int` that going to 64-bit `int` may have subtle negative effects. – chux - Reinstate Monica Jun 19 '15 at 16:10
  • The C standard does not contain the phrase "most efficient type", nor does it explicitly say anything about the efficiency of type `int` (or rather of operations on type `int`). – Keith Thompson Jun 19 '15 at 16:14
  • @Keith Thompson C makes few efficiency guarantees (if any), either in speed or size. My "most efficient type" is from OP's post and is attempting to steer the idea of "efficiency" from a single-dimensional issue (speed) to many things - speed, size, compatibility - all of which `int` tries to satisfy. `int_fast16_t` is per spec intended to be _a_ fast data type holding at least 16 bits. – chux - Reinstate Monica Jun 19 '15 at 16:20
  • @chux Indeed, I fear I was thinking a little bit one-dimensionally about the issue. It makes sense that they created these "fast" types to literally say, "screw other tradeoffs, I want speed" – something_clever Jun 19 '15 at 16:24
  • Yes, the OP used the phrase "most efficient type", and implies that that's what the standard says `int` should be. In fact the OP is mistaken on that point, and you didn't correct the OP's error in your answer. – Keith Thompson Jun 19 '15 at 16:25
  • @ask_me_about_loom Not for me, but you may want to leave this _good_ question unaccepted for a day or so. It really gets to the heart of integer aspects of C and I am sure many good insights could be posted and applied. One could write a small book about it even. – chux - Reinstate Monica Jun 19 '15 at 16:31
  • @chux Agreed. C really is one of those languages that is "a week to learn, a lifetime to master". The subtleties of its type system never cease to amaze, haha. – something_clever Jun 19 '15 at 19:00
28

int_fast16_t is guaranteed to be the fastest signed integer type with a width of at least 16 bits. int has no guarantee of its size except that:

 sizeof(char) = 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).

And that it can hold the range of -32767 to +32767.

(7.20.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
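
A minimal C11 sketch (mine, not the answerer's) that encodes the relationships listed above as compile-time checks; it should compile on any conforming hosted implementation:

#include <limits.h>
#include <stdint.h>

/* Only what the standard promises: relative sizes and minimum ranges. */
_Static_assert(sizeof(char) == 1, "char is one byte by definition");
_Static_assert(sizeof(short) <= sizeof(int), "short <= int");
_Static_assert(sizeof(int) <= sizeof(long), "int <= long");
_Static_assert(INT_MIN <= -32767 && INT_MAX >= 32767,
               "int must cover at least [-32767, +32767]");
_Static_assert(INT_FAST16_MAX >= 32767,
               "int_fast16_t covers at least the 16-bit range");

int main(void) { return 0; }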

NathanOliver
  • Instant downvote? @NathanOliver is correct in his statement. int in C's size depends on the system, although it usually is 4 bytes, it can have a higher minimum. int_fast16_t is always going to have at least 4 bytes, no matter what. – Olivier Poulin Jun 19 '15 at 15:53
  • Your statement about the relative sizes of the types is correct, but you're wrong about there not being minimum sizes. The C standard says "char" must be at least 8 bits, "short" at least 16, "int" at least 16, and "long" at least 32 – something_clever Jun 19 '15 at 15:55
  • int doesn't need to have a minimum of 16 bits, that's dependent on the processor, it could have a minimum of 32 bits, which makes it different from int_fast16_t – Olivier Poulin Jun 19 '15 at 15:57
  • @OlivierPoulin `int` DOES need to have a minimum of 16 bits. It's in the C standard. – ElderBug Jun 19 '15 at 15:58
  • @ElderBug I can't find anything in it calling out 16 bits, but it does have a specified range that is 16 bits. I amended my answer. – NathanOliver Jun 19 '15 at 16:03
  • And how would you implement an integer that satisfies this range but doesn't need to be 16 bits? It needs to be 16 bits, thus this answer doesn't answer the question. – ElderBug Jun 19 '15 at 16:06
  • @NathanOliver Strike that, notice you already changed your comment. – something_clever Jun 19 '15 at 16:07
  • @ElderBug I get that. I am just calling it out as it is in the standard. – NathanOliver Jun 19 '15 at 16:08
  • @OlivierPoulin Yes... 32 bits is larger than 16-bits, so a platform that has a 32-bit int would still be following the C standard... We're not talking about the platform-minimum, we're talking about the C standard. The C standard says that every platform that has a conformant C implementation must have an "int" that is at least 16-bits wide. – something_clever Jun 19 '15 at 16:11
  • @ask_me_about_loom Where? All I can find is a range requirement. I realize that it translates to 16 bits as that is the size of the range but I can't find it saying 16 bits. If it doesn't say it then I shouldn't in my answer. – NathanOliver Jun 19 '15 at 16:14
  • @NathanOliver By specifying a range, they are necessarily specifying a minimum size. Just because they didn't say the words doesn't mean the implication isn't there. It would be mathematically impossible in a base-2 number system to represent [-32767,32767] with anything less than 16 bits, and C is not portable to anything that isn't a base-2 number system. – something_clever Jun 19 '15 at 16:22
  • @ask_me: The C standard does not mandate the use of two's complement, so 16 bits may not be sufficient on some really weird architectures. – Kevin Jun 19 '15 at 16:22
  • @Kevin I appreciate the 2c, but notice we've all been using the word "minimum" here. 16-bits is the minimum for an "int". Yeah, an "int" could be implemented as a 17-bit "sign-mantissa-parity" and still be C-conformant. Nobody's arguing against that, we're just saying that a 15-bit "int" would not be compliant. – something_clever Jun 19 '15 at 16:27
  • @ask_me_about_loom: But if the standard said "16-bit minimum," then a *16-bit* sign-mantissa implementation would be legal even if it didn't match the range requirements. That's why the range is specified and that's why the range is the *only* correct way to describe what the standard actually says. – Kevin Jun 19 '15 at 16:28
  • @Kevin Holy hell, you're right! I suppose there is a way that a type could have 16-bits but still not be able to hold that range! Whoa! Mind blown! I've always wondered why they give the range, not just say "16 bits". This is an EXCELLENT point! – something_clever Jun 19 '15 at 16:31
  • @ask_me_about_loom: In particular on an old Cray that I programmed, the internal integer representation was signed magnitude, so a 16 bit signed integer type could only hold the range specified by the standard. Signed magnitude also has negative zero, so a conformant implementation has to deal with two kinds of compare-with-zero. Additionally, C has been ported to non-binary architectures. It's a stupid stunt, but it's been done. – Eric Towers Jun 20 '15 at 03:53
8

As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.

Here's what the standard actually says about int: (N1570 draft, section 6.2.5, paragraph 5):

A "plain" int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

The reference to INT_MIN and INT_MAX is perhaps slightly misleading; those values are chosen based on the characteristics of type int, not the other way around.

And the phrase "the natural size" is also slightly misleading. Depending on the target architecture, there may not be just one "natural" size for an integer type.

Elsewhere, the standard says that INT_MIN must be at most -32767, and INT_MAX must be at least +32767, which implies that int is at least 16 bits.

Here's what the standard says about int_fast16_t (7.20.1.3):

Each of the following types designates an integer type that is usually fastest to operate with among all integer types that have at least the specified width.

with a footnote:

The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.

The requirements for int and int_fast16_t are similar but not identical -- and they're similarly vague.

In practice, the size of int is often chosen based on criteria other than "the natural size" -- or that phrase is interpreted for convenience. Often the size of int for a new architecture is chosen to match the size for an existing architecture, to minimize the difficulty of porting code. And there's a fairly strong motivation to make int no wider than 32 bits, so that the types char, short, and int can cover sizes of 8, 16, and 32 bits. On 64-bit systems, particularly x86-64, the "natural" size is probably 64 bits, but most C compilers make int 32 bits rather than 64 (and some compilers even make long just 32 bits).

The choice of the underlying type for int_fast16_t is, I suspect, less dependent on such considerations, since any code that uses it is explicitly asking for a fast 16-bit signed integer type. A lot of existing code makes assumptions about the characteristics of int that go beyond what the standard guarantees, and compiler developers have to cater to such code if they want their compilers to be used.
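
To see what a particular implementation chose, here is a small sketch (mine, not part of the answer) that prints both ranges using the <limits.h> and <stdint.h> macros discussed above:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
  /* Both ranges are implementation-chosen; the standard only fixes the
     minimum magnitudes of -32767 and +32767. */
  printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
  printf("int_fast16_t: %jd .. %jd\n",
         (intmax_t)INT_FAST16_MIN, (intmax_t)INT_FAST16_MAX);
  return 0;
}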

Keith Thompson
2

The difference is that the fast types are allowed to be wider than their counterparts (without fast) for efficiency/optimization purposes. But the C standard by no means guarantees they are actually faster.

C11, 7.20.1.3 Fastest minimum-width integer types

1 Each of the following types designates an integer type that is usually fastest 262) to operate with among all integer types that have at least the specified width.

2 The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N.

262) The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.

Another difference is that the fast and least types are required, whereas the exact-width types are optional:

3 The following types are required: int_fast8_t int_fast16_t int_fast32_t int_fast64_t uint_fast8_t uint_fast16_t uint_fast32_t uint_fast64_t All other types of this form are optional.
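
A small sketch of my own showing the practical consequence: the exact-width int16_t (and its limit macro INT16_MAX) may be absent, whereas int_fast16_t must always be available:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  int_fast16_t fast = 32767; /* required by C99/C11, so this always compiles */

#ifdef INT16_MAX
  /* The exact-width type is optional; its limit macro is defined only
     when the implementation actually provides the type. */
  int16_t exact = 32767;
  printf("int16_t provided, value: %d\n", (int)exact);
#else
  printf("int16_t not provided by this implementation\n");
#endif

  printf("int_fast16_t value: %jd\n", (intmax_t)fast);
  return 0;
}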

P.P
  • It by no means guarantees they are actually faster. But they must not be slower. – Persixty Jun 19 '15 at 16:00
  • @DanAllen The standard simply gives the flexibility to choose the underlying types for these. I don't think it mandates that they *shouldn't* be slower. Of course, compilers won't deliberately do that. If an implementation chooses some type for `int_fast8_t` and it turns out to be slower than `int8_t`, are you suggesting it violates any requirements from the standard? – P.P Jun 19 '15 at 16:05
  • I think when it describes them as 'fastest' we are to take them as just that. It doesn't define fastest. However, on reflection I withdraw the word 'must' and would put in 'should'. The specification allows for some operations to be faster and others slower, but in practice that's not how architectures tend to work. – Persixty Jun 19 '15 at 16:36
2

From the C99 rationale, 7.8 Format conversion of integer types <inttypes.h> (the document that accompanies the Standard), emphasis mine:

C89 specifies that the language should support four signed and unsigned integer data types, char, short, int and long, but places very little requirement on their size other than that int and short be at least 16 bits and long be at least as long as int and not smaller than 32 bits. For 16-bit systems, most implementations assign 8, 16, 16 and 32 bits to char, short, int, and long, respectively. For 32-bit systems, the common practice is to assign 8, 16, 32 and 32 bits to these types. This difference in int size can create some problems for users who migrate from one system to another which assigns different sizes to integer types, because Standard C’s integer promotion rule can produce silent changes unexpectedly. The need for defining an extended integer type increased with the introduction of 64-bit systems.

The purpose of <inttypes.h> is to provide a set of integer types whose definitions are consistent across machines and independent of operating systems and other implementation idiosyncrasies. It defines, via typedef, integer types of various sizes. Implementations are free to typedef them as Standard C integer types or extensions that they support. Consistent use of this header will greatly increase the portability of a user’s program across platforms.

The main difference between int and int_fast16_t is that the latter is likely to be free of these "implementation idiosyncrasies". You may think of it as something like:

I don't care about current OS/implementation "politics" of int size. Just give me whatever the fastest signed integer type with at least 16 bits is.
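
As a small usage sketch (mine), <inttypes.h> also provides matching format macros, so such a type can be printed without knowing which underlying type the implementation picked:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
  /* PRIdFAST16 expands to the correct printf conversion specifier for
     int_fast16_t on this implementation. */
  int_fast16_t x = 12345;
  printf("x = %" PRIdFAST16 "\n", x);
  return 0;
}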

Grzegorz Szpetkowski
1

On some platforms, using 16-bit values may be much slower than using 32-bit values [e.g. an 8-bit or 16-bit store would require performing a 32-bit load, modifying the loaded value, and writing back the result]. Even if one could fit twice as many 16-bit values in a cache as 32-bit values (the normal situation where 16-bit values would be faster than 32-bit values on 32-bit systems), the need to have every write preceded by a read would negate any speed advantage the denser packing could produce, unless a data structure was read far more often than it was written. On such platforms, a type like int_fast16_t would likely be 32 bits.

That having been said, the Standard unfortunately does not allow what would be the most helpful semantics for a compiler, which would be to allow variables of type int_fast16_t whose address is not taken to arbitrarily behave as 16-bit types or as larger types, depending upon what is convenient. Consider, for example, the function:

#include <stdint.h>

int32_t blah(int32_t x)
{
  /* If int_fast16_t is wider than 16 bits this conversion is lossless;
     otherwise coercing an oversized value is implementation-defined. */
  int_fast16_t y = x;
  return y;
}

On many platforms, 16-bit integers stored in memory can often be manipulated just like those stored in registers, but there are no instructions to perform 16-bit operations on registers. If an int_fast16_t variable stored in memory is only capable of holding -32768 to +32767, that same restriction would apply to int_fast16_t variables stored in registers. Since coercing oversized values into signed integer types too small to hold them is implementation-defined behavior, that would compel the above code to add instructions to sign-extend the lower 16 bits of x before returning it. If the Standard allowed such a type, a flexible "at least 16 bits, but more if convenient" type could eliminate the need for such instructions.

supercat
1

An example of how the two types might be different: suppose there’s an architecture where 8-bit, 16-bit, 32-bit and 64-bit arithmetic are equally fast. (The i386 comes close.) Then, the implementer might use an LLP64 model, or better yet allow the programmer to choose between ILP64, LP64 and LLP64, since there’s a lot of code out there that assumes long is exactly 32 bits, and that sizeof(int) <= sizeof(void*) <= sizeof(long). Any 64-bit implementation must violate at least one of these assumptions.

In that case, int would probably be 32 bits wide, because that will break the least code from other systems, but uint_fast16_t could still be 16 bits wide, saving space.
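
For reference, a tiny sketch (not part of the answer) that reveals which of these data-model assumptions hold on a given implementation:

#include <stdio.h>

int main(void)
{
  /* LLP64 (e.g., 64-bit Windows): int 4, long 4, pointers 8.
     LP64 (most 64-bit Unix):      int 4, long 8, pointers 8.
     ILP64:                        int 8, long 8, pointers 8. */
  printf("sizeof(int)    = %zu\n", sizeof(int));
  printf("sizeof(long)   = %zu\n", sizeof(long));
  printf("sizeof(void *) = %zu\n", sizeof(void *));
  return 0;
}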

Davislor