
If `sizeof(int) == sizeof(long)`, then is `INT_MIN == LONG_MIN && INT_MAX == LONG_MAX` always true?

Are there any real, existing cases demonstrating that it is not true?

UPD. A similar question: Are there any hosted C implementations which have `CHAR_BIT > 8`?

pmor
    One could imagine a computer and a mad compiler writer that used 2's complement for ints and sign-magnitude for longs. But no real-world examples. – stark Oct 19 '21 at 13:14
    Just curious, why do you want to know if you can rely on that? – Stefan Riedel Oct 19 '21 at 13:31
    @stark Apparently, ones-complement and sign-and-magnitude representations of signed integers are going to be abandoned in the next version of the C standard. – Ian Abbott Oct 19 '21 at 13:32
    [*The New C Standard*](http://www.coding-guidelines.com/cbook/cbook1_2.pdf) on page 594 says there were Cray implementations where `short` was a 32-bit type occupying 64 bits of space. In that case, it might have had `sizeof(short) == sizeof(int)` but `SHORT_MAX < INT_MAX`. – Nate Eldredge Oct 19 '21 at 13:45
  • I think it's a safe assumption on any hosted implementation, but that's not a guarantee. C allows implementations to do weird things with type sizes and representations, and there's always some oddball, niche architecture that has to do things differently. – John Bode Oct 19 '21 at 13:45
  • @NateEldredge wonder how many of those "cray implementations" are still in use... – Antti Haapala -- Слава Україні Oct 20 '21 at 15:05

1 Answer


It need not be true. C11 6.2.6.2p2:

  1. For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M <= N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:

    • the corresponding value with sign bit 0 is negated (sign and magnitude);
    • the sign bit has the value -(2^M) (two's complement);
    • the sign bit has the value -(2^M - 1) (ones' complement).

    Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
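
To make the padding-bit case concrete, here is a small sketch (standard C only, nothing beyond `<limits.h>` and `<stdio.h>`) that counts how many bits of an `int` actually carry value or sign information. On an implementation with padding bits the two counts differ, so two types of the same `sizeof` can still have different ranges:

```c
#include <limits.h>
#include <stdio.h>

/* Count the value bits of int: INT_MAX is 2^M - 1, so it has
   exactly M bits set when written in binary. */
static int int_value_bits(void)
{
    int bits = 0;
    for (unsigned long long v = INT_MAX; v != 0; v >>= 1)
        bits++;
    return bits;
}

int main(void)
{
    int object_bits = (int)(sizeof(int) * CHAR_BIT); /* bits of storage   */
    int used_bits   = int_value_bits() + 1;          /* value bits + sign */

    printf("int: %d object bits, %d value+sign bits, %d padding bits\n",
           object_bits, used_bits, object_bits - used_bits);
    return 0;
}
```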


Now the question is whether any implementation has a different number of padding bits for `int` and `long`, or, as stark mentioned, different representations for the two types. It is very hard to prove that no such implementation is currently in use, but I believe it is very unlikely that one would come across such a system in real life.
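
If code actually relies on `int` and `long` having the same range, the portable approach is to check the limits directly instead of inferring them from `sizeof`. A minimal sketch, assuming a C11 compiler:

```c
#include <limits.h>
#include <assert.h>

/* Fails to compile on any implementation where the ranges differ,
   no matter what sizeof(int) and sizeof(long) happen to be. */
static_assert(INT_MIN == LONG_MIN && INT_MAX == LONG_MAX,
              "this code assumes int and long have identical ranges");
```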