
N3797::3.9.1/1 [basic.fundamental] says

For unsigned narrow character types, all possible bit patterns of the value representation represent numbers.

That's a bit unclear to me. We have the following ranges for narrow character types:

unsigned char: 0 to 255
signed char: -128 to 127
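
Those are the usual values for an 8-bit, two's-complement implementation; the Standard itself only guarantees minimum ranges. A quick sketch to check them on a given compiler:

    #include <iostream>
    #include <limits>

    int main() {
        // Unary + promotes the char types to int so the numbers print as numbers.
        std::cout << "unsigned char: "
                  << +std::numeric_limits<unsigned char>::min() << " to "
                  << +std::numeric_limits<unsigned char>::max() << '\n';  // 0 to 255
        std::cout << "signed char:   "
                  << +std::numeric_limits<signed char>::min() << " to "
                  << +std::numeric_limits<signed char>::max() << '\n';    // -128 to 127
    }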

For both unsigned char and signed char objects we have a one-to-one mapping from the bit patterns of the object representation to the integral values they can represent. Yet the Standard also says in N3797::3.9.1/1 [basic.fundamental]:

These requirements do not hold for other types.

Why doesn't the requirement I cited hold for, say, the signed char type?

2 Answers


Signed types can use one of three representations: two's complement, one's complement, or sign-magnitude. The last two each have one bit pattern (the negation of zero) which doesn't represent a number.

Two's complement is more or less universal for integer types these days; but the language still allows for the others.
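
For concreteness, here is a small sketch decoding the same 8-bit patterns under each of the three representations (the helper functions are purely illustrative, not anything the language provides):

    #include <cstdint>
    #include <iostream>

    // Illustrative decoders: what an 8-bit pattern would mean under each
    // of the three representations the Standard allows for signed types.
    int twos_complement(std::uint8_t bits) { return bits < 128 ? bits : bits - 256; }
    int ones_complement(std::uint8_t bits) { return bits < 128 ? bits : -(255 - bits); }
    int sign_magnitude (std::uint8_t bits) { return bits < 128 ? bits : -(bits - 128); }

    int main() {
        std::uint8_t all_ones = 0xFF;                    // every bit set
        std::cout << twos_complement(all_ones) << '\n';  // -1
        std::cout << ones_complement(all_ones) << '\n';  // 0: the "negative zero" pattern
        std::cout << sign_magnitude (all_ones) << '\n';  // -127

        std::uint8_t sign_only = 0x80;                   // only the sign bit set
        std::cout << twos_complement(sign_only) << '\n'; // -128
        std::cout << ones_complement(sign_only) << '\n'; // -127
        std::cout << sign_magnitude (sign_only) << '\n'; // 0: negative zero again
    }

In two's complement all 256 patterns name distinct numbers; in the other two, one pattern is left over as a second zero, which is exactly the slack the Standard leaves for signed char but forbids for unsigned char.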

Mike Seymour
  • +1 *so* much easier to read than what I was trying (horribly) to say. – WhozCraig Oct 08 '14 at 04:42
  • You mean the bit representing sign doesn't participate in a number representation? –  Oct 08 '14 at 04:47
  • @DmitryFucintv: The sign bit does participate. I mean there's one bit pattern, obtained by negating zero (setting all the bits in one's complement, or setting just the sign bit in sign-magnitude) which doesn't represent a number. – Mike Seymour Oct 08 '14 at 04:49
  • Ah, two ways to represent zero... indeed, thank you. –  Oct 08 '14 at 04:54
  • Nicely explained, but (unfortunately) largely wrong. Yes, a signed magnitude or 1's complement representation can have a negative zero, but that's pretty much irrelevant. It would only become relevant *if* they used negative zero as a trap representation. – Jerry Coffin Oct 08 '14 at 04:58
  • @JerryCoffin: It's relevant in that not applying the requirements to `signed char` *allows* negative zero to be reserved as a trap value. Are you saying there's some other reason why negative zero must represent a number? – Mike Seymour Oct 08 '14 at 05:13
  • I'm saying that it's the trap representation that really matters, and negative zero is (at most) a way of producing one. It's also, however, entirely possible to create trap representations even with 2's complement where you don't have a negative zero. – Jerry Coffin Oct 08 '14 at 05:49

A few machines have what are called "trap representations". This means (for example) that an int can contain an extra bit (or more than one) to signify whether it has been initialized or not.

If you try to read an int with that bit saying it hasn't been initialized, it can trigger some sort of trap/exception/fault that (for example) immediately shuts down your program with some sort of error message. Any time you write a value to the int, that trap representation is cleared, so reading from it afterwards will work.

So basically, when your program starts, it initializes all your ints to such trap representations. If you try to read from an uninitialized variable, the hardware will catch it immediately and give you an error message.

The standard mandates that for unsigned char, no such trap representation is possible--all the bits of an unsigned char must be "visible"--they must form part of the value. That means none of them can be hidden; no pattern of bits you put into an unsigned char can form a trap representation (or anything similar). Any bits you put into unsigned char must simply form some value.

Any other type, however, can have trap representations. If, for example, you take some (more or less) arbitrarily chosen 8 bits out of some other type, and read them as an unsigned char, they'll always form a value you can read, write to a file, etc. If, however, you attempt to read them as any other type (signed char, unsigned int, etc.) it's allowable for it to form a trap representation, and attempting to do anything with it can give undefined behavior.
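
Here is a minimal sketch of that guarantee in practice (the double is just an arbitrary object to inspect):

    #include <cstddef>
    #include <cstdio>

    int main() {
        double d = 3.14;

        // Reading any object's bytes through unsigned char is always defined:
        // every bit pattern of an unsigned char is a value, so no read can trap.
        const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&d);
        for (std::size_t i = 0; i != sizeof d; ++i)
            std::printf("%02x ", static_cast<unsigned>(bytes[i]));
        std::printf("\n");

        // Reinterpreting those same bytes as some other type (say, via a cast
        // to long long*) carries no such guarantee: the resulting pattern is
        // allowed to be a trap representation for that type.
    }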

Jerry Coffin