
Cppreference.com claims:

If no length modifiers are present, it's guaranteed to have a width of at least 16 bits.

However, the latest standard draft only says:

Plain ints have the natural size suggested by the architecture of the execution environment.

With the footnote only adding that:

int must also be large enough to contain any value in the range [INT_MIN, INT_MAX], as defined in the header <climits>.

From these sections of the standards, it seems like int's size is entirely implementation dependent. Where does the "16 bit minimum" guarantee come from?

Dun Peal
  • Similar: https://stackoverflow.com/questions/10053113/is-c11s-long-long-really-at-least-64-bits – juanchopanza Jun 19 '18 at 14:38
  • I'd refer this question to cppreference.com. My understanding of paragraphs 6.9.1.1- of the C++17 standard is that the size of char should be large enough to hold the machine's character set, and int should be at least as large as (short, which should be at least as large as) char. Paragraph 6.9.1.4 says the size should be in bits. Paragraph 5.3.1 says the character set should have at least 96 specified characters, so I take it char should have at least 7 bits. In C++17, paragraph 4.4.1 says it is at least 8 bits. – Uri Raz Mar 19 '22 at 11:41
  • @UriRaz: No, even back in C89 `char` was required to be 8 bits. – MSalters Mar 28 '22 at 07:26
  • @MSalters I didn't say char was required to be 8 bits in C89. Note that paragraph 5.2.1 says the execution character set will include, at the least, close to 100 different characters. As a single-byte character is made from a contiguous sequence of bits, I take it to mean it is required to be at least 7 bits. Feel free to point out my mistake, or point me to a contrary statement in the spec. – Uri Raz Mar 28 '22 at 15:14
  • @UriRaz: The "mistake" is that you're overlooking a direct definition; 5.2.4.2.1 Sizes of integer types— number of bits for smallest object that is not a bit-field (byte) CHAR_BIT 8 – MSalters Mar 28 '22 at 15:21
  • @MSalters I didn't say anything about CHAR_BIT, nor assumed it is defined as 8. – Uri Raz Mar 28 '22 at 17:11
  • @UriRaz: In case you misunderstood its definition, `CHAR_BIT` is the number of bits in a char - which you claimed had a minimum of 7. That's why I pointed out that the minimum is in fact one higher. The "96 characters" IIRC is the intersection of ASCII and EBCDIC. – MSalters Mar 29 '22 at 07:15
  • @MSalters Oh, I know what CHAR_BIT is. Note I wrote that in C++17 it had to be at least 8. I don't see any reason why it would be at least that in C89. – Uri Raz Mar 30 '22 at 16:53
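For what it's worth, the lower bound on `CHAR_BIT` debated in the comments above can be verified directly; a minimal sketch, assuming any C++11-or-later compiler:

```cpp
#include <climits>

// 5.2.4.2.1 of the C standard (which ISO C++ incorporates via <climits>)
// requires CHAR_BIT to be at least 8, so char is at least 8 bits wide.
static_assert(CHAR_BIT >= 8, "a char has at least 8 bits");

int main() {}
```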

1 Answer


The minimum size for int follows from the requirement that INT_MIN shall be no greater than -32767 and INT_MAX shall be at least +32767. Note that that's 2^16 - 1 possible values, which allows for 1's complement with a signed-zero representation.
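As an illustration (a minimal sketch rather than a quote from either standard, assuming a C++11-or-later compiler), these requirements can be checked at compile time via <climits>:

```cpp
#include <climits>

// Guaranteed by the C limits that ISO C++ incorporates via <climits>.
// 1's complement implementations may stop at -32767, so only that
// magnitude is required for INT_MIN.
static_assert(INT_MAX >= 32767, "INT_MAX is at least +32767");
static_assert(INT_MIN <= -32767, "INT_MIN is at most -32767");

// Representing 65535 distinct values needs at least 16 bits,
// so this also holds on every conforming implementation.
static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits wide");

int main() {}
```

Any conforming implementation has to accept all three assertions; an `int` narrower than 16 bits could not cover the required [-32767, +32767] range.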

MSalters
  • I found the ranges you mention in an older standard - `ISO/IEC 9899`. However, I can't find them anywhere in the latest draft. – Dun Peal Jun 19 '18 at 14:39
  • @DunPeal: What do you mean by "older standard"? ISO/IEC 9899:2011 is the current C standard. Note that ISO C++ (14882) explicitly references the ISO C standard, and has always done so. Additionally, each release of the C++ standard also specifically mentions which release of the C standard it references. – MSalters Jun 19 '18 at 14:42
  • I stand corrected. So effectively, the ISO C++ minimum int size is based on the macros `INT_MIN` and `INT_MAX`, as defined by the ISO C standard, upon which the ISO C++ standard explicitly depends. – Dun Peal Jun 19 '18 at 14:46
  • The minimum size of int should be -32768 not -32767. – Honey Yadav Jun 19 '18 at 14:52
  • @HoneyYadav: Same comment as on your answer: where did you get that from? 5.2.4.2.1 [Sizes of integer types] says -32767. – MSalters Jun 19 '18 at 14:57
  • @HoneyYadav -- this gets confusing. On many systems, a 16-bit signed type can hold the value -32768, and that is, indeed, the value of `INT_MIN` when the compiler defines `int` as that 16-bit signed type. But the **standard** does not require that value; it allows an implementation to define `int` to only go as low as -32767, because some other forms of hardware (for example, 1's complement, as mentioned in this answer) cannot represent -32768 in 16 bits. – Pete Becker Jun 19 '18 at 16:33