
Context:

This is a follow-up to that other question of mine. I asked about both C and C++, and I soon got an answer for C++, because the latest draft of C++20 explicitly requires that signed integer types use two's complement and that padding bits (if any) cannot produce trap representations. Unfortunately, this is not true for C.

Of course, I know that most modern systems use only two's complement representations of integers and no padding bits, meaning that no trap representation can be observed. Nevertheless, the C standard still seems to allow three representations of signed types: sign and magnitude, one's complement, and two's complement. And at least the C18 draft (n2310, 6.2.6 Representations of types) explicitly allows padding bits for integer types other than char. This is still true in the latest version (n2454) I could find.
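(Not part of the original question, just an illustration: the three permitted representations differ in which bits of -1 are set, so a small runtime check on its low bits can tell them apart. This is only a sketch of that observation; it says nothing about padding bits or trap representations.)

```c
#include <stdio.h>

int main(void)
{
    /* In two's complement, -1 has all value bits set (-1 & 3 == 3);
     * in one's complement, -1 is ~1, so its lowest bit is clear (== 2);
     * in sign and magnitude, -1 is sign bit plus magnitude 1 (== 1). */
    switch (-1 & 3) {
    case 3: puts("two's complement");   break;
    case 2: puts("one's complement");   break;
    case 1: puts("sign and magnitude"); break;
    }
    return 0;
}
```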

Question

So, given possible padding bits or a non-two's-complement signed representation, int variables could contain trap values on conforming implementations. Is there a reliable way to make sure that an int variable contains a valid value?

Serge Ballesta
  • You can set an `int` to 1, 2, 4, and so on up to near its maximum and examine the object representation through a pointer to `char`. This will identify the value bits, except some additional logic may be needed to distinguish them from parity/padding bits that happen to be set at times. The sign bit can be detected similarly. Knowing which bits are which, you can then examine the object representation of a candidate and detect whether any of its padding bits are on. Unfortunately, this will not tell you whether those padding bits denote a trap representation. – Eric Postpischil Jan 07 '20 at 12:22 (a sketch of this idea follows the comments)
  • I don't think you can check this. A trap bit doesn't even have to be present in the object representation of an int object, so you can't access it through a `char` lvalue. – Language Lawyer Jan 08 '20 at 21:26
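To make Eric Postpischil's suggestion above more concrete, here is a rough sketch. The helper names (`map_value_bits`, `has_set_padding_bits`) are illustrative, not standard. It probes the object representation through `unsigned char` to build a mask of value/sign bits, then flags a candidate whose representation has bits set outside that mask. As both comments point out, this can only detect padding bits that happen to be set; it cannot prove that an object is not a trap representation.

```c
#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Build a per-byte mask of the bits that act as value bits or the sign
 * bit, by storing known values and inspecting their object representation
 * through unsigned char. */
static void map_value_bits(unsigned char mask[sizeof(int)])
{
    memset(mask, 0, sizeof(int));
    for (int v = 1; ; v *= 2) {            /* 1, 2, 4, ... up to the top value bit */
        unsigned char repr[sizeof(int)];
        memcpy(repr, &v, sizeof v);
        for (size_t i = 0; i < sizeof(int); i++)
            mask[i] |= repr[i];
        if (v > INT_MAX / 2)               /* next doubling would overflow */
            break;
    }
    int minus_one = -1;                    /* also capture the sign bit */
    unsigned char repr[sizeof(int)];
    memcpy(repr, &minus_one, sizeof minus_one);
    for (size_t i = 0; i < sizeof(int); i++)
        mask[i] |= repr[i];
}

/* Return nonzero if the object representation of *p has a bit set that
 * never showed up as a value or sign bit in the probes above. */
static int has_set_padding_bits(const int *p, const unsigned char mask[sizeof(int)])
{
    unsigned char repr[sizeof(int)];
    memcpy(repr, p, sizeof repr);
    for (size_t i = 0; i < sizeof(int); i++)
        if (repr[i] & (unsigned char)~mask[i])
            return 1;
    return 0;
}
```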

0 Answers