Your interpretation is correct.
Looking at paragraph 2 of 6.2.6.2:
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:

- the corresponding value with sign bit 0 is negated (sign and magnitude);
- the sign bit has the value −(2^M) (two's complement);
- the sign bit has the value −(2^M − 1) (ones' complement).

Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
This means that an implementation using either ones' complement or sign and magnitude has, for each signed integer type of a given size, one specific representation that must be either a negative zero or a trap representation. It is then up to the implementation to choose which of those applies.
As an example, suppose a system uses sign and magnitude representation and has a 32-bit int with no padding bits. Then the representation that would be negative zero, if it is supported, is 0x80000000.
Now suppose the following operations are performed:

```c
int x = 0x7fffffff;
x = ~x;
```
If the implementation supports negative zero, the `~` operator generates -0 as the result and stores it in `x`. If it does not, the operation creates a trap representation and invokes undefined behavior as per paragraph 4.