Let `a` be a variable of a signed integer type `T`, and let `U` be the corresponding unsigned type. The expression `(U)a` yields a value corresponding to the two's complement representation of the value of `a` as `U`. I want to know whether the following is guaranteed by the C standard to undo that cast. Let `u` be of type `U` and have the value of `(U)a`, and let `MAX` be the maximum value the type `T` can hold. (Be aware of the implicit conversions to unsigned types and the fact that every non-negative value of a signed variable stays unchanged by these conversions.)
First, suppose `T` is able to hold the result:

```c
T convert_2scomplement_to_T(U n) {
    return n <= MAX ? n : -(T)(U)-n;
}
```
Second, suppose the function should detect such an invalid argument; let `MIN` be the minimum value `T` can hold:
```c
T convert_2scomplement_to_T_checked(U n) {
    if(n <= MAX) return n;
    if( !(n & ((U)1 << (sizeof(U)*CHAR_BIT - 1))) ) { // (*)
        // invalid argument, the value is positive and `T' cannot hold it
    }
    /* `n' represents something negative if we're here. */
    if(-n < MIN) {
        // invalid argument, the value is negative and `T' cannot hold it
    }
    return -(T)(U)-n;
}
```
The line marked with `// (*)` is not strictly conforming, as far as I can tell, because the standard doesn't make any guarantees about the position of the sign bit.
Do the described functions work as expected? And is the check for the sign bit avoidable in strictly conforming code?
(And besides the language-lawyer question: it would be great if someone who has written code, knowing of at least one person who used it on a platform not using two's complement, could leave a comment saying what machine that was. Wikipedia mentions
signed magnitude:
one's complement:
- https://en.wikipedia.org/wiki/PDP-1
- https://en.wikipedia.org/wiki/CDC_160_series
- https://en.wikipedia.org/wiki/UNIVAC_1100/2200_series
but this doesn't seem to be anything to worry about in programmes written today. Is there a reason the standard still addresses such machines?)