Sign is a concept that we lay on top of bit-patterns. Bitwise NOT (~) concerns only the bit-pattern, not the sign of the value. Applying ~ to a signed value and to an unsigned value with the same bit-pattern gives identical results.
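A minimal sketch of that claim (the specific values and 32-bit types here are just for illustration):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t u = UINT32_C(0x0000F00D);
    int32_t  s = INT32_C(0x0000F00D);

    /* ~ flips every bit and never consults the sign: the same
       operand bits always produce the same result bits. */
    printf("~u = 0x%08" PRIX32 "\n", (uint32_t)~u);  /* 0xFFFF0FF2 */
    printf("~s = 0x%08" PRIX32 "\n", (uint32_t)~s);  /* 0xFFFF0FF2 */
    return 0;
}
```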
Having said that, let's look at the C standard: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf (a draft version, available for free). Section 6.3.1.1, page 50:
> If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. (58) All other types are unchanged by the integer promotions.
I take this to mean that char and short types will be promoted to int (or unsigned int, if int cannot represent all values of the original type) when we actually operate on them. This makes sense because we want the operations to be done as quickly as possible, so we should target the native word size of the machine.
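A quick way to see the promotion in action (assuming a typical platform where int is 32 bits and uint16_t is 16):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t x = 0;

    /* The operand of ~ undergoes integer promotion, so the
       result of ~x has type int, not uint16_t. */
    printf("sizeof x  = %zu\n", sizeof x);   /* 2 */
    printf("sizeof ~x = %zu\n", sizeof ~x);  /* sizeof(int), e.g. 4 */
    return 0;
}
```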
Given this, we can see what is actually happening. The machine will perform all of these operations at 'int' size, because the operands of both '==' and '~' fit in an int, which I'm assuming is 32 bits on your machine.
Now the first thing to look at is the value of 'a'. We take 0, we NOT it and get 0xFFFFFFFF. We assign this to a uint16_t and it is truncated to 0xFFFF. When we're ready to do the comparison, we load the 0xFFFF and, since the value is unsigned, zero-extend it to 0x0000FFFF. For the value of 'b', everything is the same except that when we read the 0xFFFF back for the comparison, we sign-extend it to 0xFFFFFFFF.
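Here's a small sketch of that walkthrough; the casts only make the promoted values visible and again assume a 32-bit int:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = ~0;  /* 0xFFFFFFFF truncated to 0xFFFF */
    int16_t  b = ~0;  /* same stored bits: 0xFFFF       */

    /* At the comparison each operand is promoted back to int:
       'a' is zero-extended, 'b' is sign-extended. */
    printf("a promotes to 0x%08X\n", (unsigned)(int)a);  /* 0x0000FFFF */
    printf("b promotes to 0x%08X\n", (unsigned)(int)b);  /* 0xFFFFFFFF */
    return 0;
}
```

Now for your cases: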
- NOTting the zero gives 0xFFFFFFFF, and comparing that to 0x0000FFFF will fail.
- We took our 0xFFFFFFFF, chopped it down to 0xFFFF, and then zero-extended it back to 0x0000FFFF, giving the same value as 'a'.
And so on.
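Putting it all together; since your exact expressions aren't shown here, I'm assuming the comparisons in your question looked roughly like these (again with a 32-bit int):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = ~0;
    int16_t  b = ~0;

    /* 0xFFFFFFFF == 0x0000FFFF -> 0 (false) */
    printf("~0 == a           : %d\n", ~0 == a);

    /* chop to 0xFFFF, zero-extend back to 0x0000FFFF -> 1 (true) */
    printf("(uint16_t)~0 == a : %d\n", (uint16_t)~0 == a);

    /* 'b' sign-extends back to 0xFFFFFFFF -> 1 (true) */
    printf("~0 == b           : %d\n", ~0 == b);
    return 0;
}
```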