One major difficulty with the evolution of the C standard is that by the time efforts were made to standardize the language, there were not only implementations that did certain things differently from each other, but also a significant body of code written for those implementations which relied upon those behavioral differences. Because the creators of the C standard wanted to avoid forbidding implementations from behaving in ways their users might rely upon, certain parts of the standard are a real mess. Some of the messiest parts concern integer promotion, such as the behavior you've observed.
Conceptually, it would seem to make more sense for `unsigned char` to promote to `unsigned int` rather than to `signed int`, at least when used as anything other than the right-hand operand of the `-` operator. Combinations of other operators may yield large results, but no operator other than `-` could yield a negative one. To see why `signed int` was chosen despite the fact that the result can't be negative, consider the following:
```c
int i1; unsigned char b1, b2; unsigned int u1; long l1, l2, l3;
l1 = i1 + u1;
l2 = i1 + b1;
l3 = i1 + (b1 + b2);
```
There's no mechanism in C by which an operation between two different types could yield a third type that isn't one of the originals, so the first statement must perform the addition as either signed or unsigned; unsigned generally yields slightly less surprising results, especially given that integer literals are signed by default (it would be very weird if adding `1` rather than `1u` to an unsigned value could make it negative). It would be surprising, however, if the third statement could turn a negative value of `i1` into a large unsigned number. Having the first statement above yield an unsigned result but the third statement yield a signed result implies that `(b1+b2)` must be signed.
IMHO, the "right" way to resolve signedness-related issues would be to define separate numeric types with documented "wrapping" behavior (as present unsigned types have), versus types that should behave as whole numbers, and have the two kinds of types exhibit different promotion rules. Implementations would have to keep supporting existing behavior for code using existing types, but new types could implement rules designed to favor usability over compatibility.